Large corporations don’t “beg” their employees to do anything. Unless “do as we say or you’re fired” is literally the new “beg”.
That, and not one of the quotes resembles anything close to begging. It’s kind of a hyperbolic article that seems to want to make something of a leaked email that doesn’t contain much to hang a story on. (Oh, wait a minute, it was Slack messages?! Yeah, skip this one and go on about your day.)
I mean, with all the discussion around layoffs, you shouldn't take your continued employment for granted; you're going to get cut regardless of utility or behavior. Given that you're probably getting fired anyway, pretty please don't leak secrets???
By now, every company should have made a policy decision on whether they are going to use no AI or lots of AI.
If you are not giving clear guidance to employees you are getting the worst of both worlds - a few jokers broadcasting internal documents and source code out to every online service they can find, while most people hold back either due to caution or because they don't know what's available.
Amazon uses plenty of ML, and will be using more and more of it as time goes by. Witness Amazon SageMaker. And plenty of AWS services already make use of ML, whether SageMaker itself or other ML tooling.
EDIT: The key thing here is that it's AWS ML running on AWS infrastructure, not ChatGPT running on someone else's infrastructure somewhere else.
Surely if the stuff looks like secrets, that implies it was on the 2021-era open web that was used to train chatGPT, not in the current prompts being entered?
On an unrelated note, I see iPhones now automatically capitalise chatGPT like so. That must be a recent change.
How widespread can this even be? I haven't been able to access the free version of ChatGPT for days, and the paid version isn't available. How many people are even using this?
Awaiting the first fired-for-using-ai employment case. Not sure how that would go.
I doubt it would be because they are using AI. It's because they are entering sensitive information into a textbox, and that information is stored in OpenAI's database.
No DLP solution is going to detect someone entering prompts on their personal machine, and any decent infosec team is going to blacklist GPT domains at the corporate proxy or via endpoint agents.
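For what it's worth, the kind of check a DLP endpoint agent runs against outgoing text is usually just pattern matching. Here's a minimal sketch of that idea; the function name and the pattern list are illustrative, not any real product's rules, and real tools use far larger rule sets:

```python
import re

# Illustrative secret-like patterns; a real DLP rule set is much larger.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)\b(password|api[_-]?key|secret)\s*[:=]\s*\S+"),
]

def looks_sensitive(prompt: str) -> bool:
    """Return True if the prompt matches any secret-like pattern."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

print(looks_sensitive("summarize this doc"))              # False
print(looks_sensitive("my key is AKIAABCDEFGHIJKLMNOP"))  # True
```

Which is exactly why it only works where the agent is installed: the same prompt typed into a browser on a personal laptop never passes through any of these checks.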