Should I trust ChatGPT with my information?
Jack from Quora asks:
Should I trust ChatGPT with my information?
While OpenAI and other AI companies promise not to mishandle your information (or handle it without your explicit consent), it is simply much safer not to give these tools sensitive information in the first place.
I generally treat ChatGPT like a semi-public Slack channel or an unencrypted cloud-based note app.
It’s an incredible productivity tool, but the default setting is that OpenAI uses your conversations to train future models.
This means a human reviewer could eventually see your logs, or your data could theoretically influence a response for someone else down the road.
My rule of thumb for what to keep out: If you wouldn't want it appearing in a company-wide newsletter, don't put it in the prompt.
This includes the obvious stuff like Social Security numbers and bank details, but for professionals, the real risk is proprietary data.
Avoid pasting internal source code, unreleased business strategies, or specific client names.
If I’m drafting a sensitive email, I always swap real names for placeholders like "Client X" or "Person A" before hitting enter.
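That name-swapping step is easy to script. Here is a minimal sketch in Python; the client names, regex patterns, and placeholder labels are illustrative assumptions, not a complete PII scrubber:

```python
import re

# Hypothetical mapping of real names to placeholders -- adjust for your own data.
REPLACEMENTS = {
    "Acme Corp": "Client X",
    "Jane Doe": "Person A",
}

# Rough patterns for SSN-like and email-like strings (illustrative, not exhaustive).
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Replace known names with placeholders, then mask obvious PII patterns."""
    for real, alias in REPLACEMENTS.items():
        text = text.replace(real, alias)
    text = SSN_RE.sub("[SSN]", text)
    text = EMAIL_RE.sub("[EMAIL]", text)
    return text

draft = "Tell Jane Doe (jane@acme.com) that Acme Corp's file shows 123-45-6789."
print(redact(draft))
# → Tell Person A ([EMAIL]) that Client X's file shows [SSN].
```

Running your draft through a helper like this before pasting it into a prompt catches the slips that are easy to make when you are moving fast.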
Some specific things to note:
Toggle off training: Go into your settings and turn off "Chat History & Training." This stops OpenAI from using your data to improve their models.
Temporary Chats: Use this mode for one-off tasks you don't want saved to your history at all.
Memory feature: Be aware that ChatGPT now remembers details across chats to be more helpful. If you’ve shared personal details in the past, it’s worth auditing your Memory settings occasionally to clear out anything that shouldn't be there.
In brief, the more sensitive the info you're using, the more cautious you should be.
These days it’s also possible to run AI models locally using tools like Ollama. Consider that option for sensitive work, since all the data stays on your computer.