inspxtr@lemmy.world to AI@lemmy.ml • OpenAI now has 35 in-house lobbyists, and will have 50 by the end of the year.
20 days ago

care to elaborate on the possibilities of “really big” that you’re imagining?
I’m also curious. A quick search came up with these; not sure which one is the most reliable or up to date.
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did the analysis a disservice by not linking to the report (and code) that outlines the methodology and shows what the distribution of similarities looks like. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
you should try asking the same question with xAI’s Grok if possible. You might also ask ChatGPT about Altman.
If you’ve never worked before, this can be considered a practice run for when you do.
Like one of the other commenters said, assume everything is accessible by Google and/or your university (and later, your boss, company, organization, …).
And that applies not just to you, but to the people who interact with you through it. You may be able to put up defenses, but if they don’t (and they most likely do not), the data you exchange with them would likely be accessible as well.
So here are some suggestions for minimizing private-data access by Google/your university while still being able to work with others (adjust depending on your threat model, of course):