manton

Strange juxtaposition: the war in Iran happening at the same time as the controversy around Anthropic, OpenAI, and the Pentagon. OpenAI has a post with the details of its agreement, which also includes red lines on mass surveillance and autonomous weapons:

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.

I’m confused about how Anthropic’s proposed contract differs from the contract that OpenAI has shared. If the Pentagon offered the same agreement to Anthropic, would they accept it? Lots of questions.

carljonard@mastodon.social

@manton The provided contract language only says that the DoW can’t cross those red lines “where law, regulation, or Department policy” says that they can’t.

To me that sounds a lot like "for all lawful purposes" with more words, which in turn sounds a lot like "you're not allowed to do things that you're already not allowed to do" — which is basically no restriction at all.

billseitz@toolsforthought.social

@manton x.com/i/status/202804871436825

manton

@carljonard My reading is that OpenAI thinks the combination of existing rules (e.g. DoD Directive 3000.09) plus OpenAI's own safety checks will prevent autonomous use. That's hardly no restriction. But we can't compare with Anthropic's language.

zottmann.dev

@manton See bsky.app/profile/masn…

czottmann@norden.social

@manton See bsky.app/profile/masnick.com/p for some insights.

manton

@zottmann.dev Mike’s summary oversimplifies the tweets. A more accurate reading would be that for Anthropic we assume all the safety is in the contract, while for OpenAI it’s spread across the contract and “technical safeguards”.
