Could this be the first real guardrail for artificial intelligence?

Senators Josh Hawley and Richard Blumenthal are once again stepping into the AI spotlight, this time with a bill that would create a federal risk-evaluation program for advanced artificial intelligence systems.

According to Axios, the Artificial Intelligence Risk Evaluation Act would create a program within the Department of Energy to collect data on potential AI disasters – think rogue systems, security breaches, or weaponization by adversaries.

It sounds almost like science fiction, but the fears are all too real.

Here's the kicker: developers would be required to submit their models for review before deployment.

That's a sharp contrast with the usual "move fast and break things" mantra. It reminds me of how, just a few months ago, California passed a landmark AI law focused on consumer safety and transparency.

Both efforts point to a broader movement – lawmakers finally tightening the reins on a technology that has been running ahead of regulation.

What really struck me is how bipartisan this push has become. You might think Hawley and Blumenthal wouldn't agree on much, but here they are, singing the same tune about AI risk.

And this isn't their first rodeo; earlier this year, they teamed up on a proposal to protect content creators from AI-generated replicas of their work.

They clearly see AI as a double-edged sword, equally capable of creativity and chaos.

But here's where it gets messy. The White House has signaled that excessive regulation could stifle innovation and set the United States back in the AI race with China.

This bill's safety-first posture clashes with the mood at the Snapdragon Summit, where chipmakers showed off AI-powered laptops and gushed about "agentic AI" as if it were the next industrial revolution.

The tech world is charging ahead, and policymakers are scrambling to catch up.

Here are my two cents: it's refreshing to see lawmakers at least trying to grapple with these questions before disaster strikes.

Sure, bills like this won't fix everything, and they might even slow down some flashy rollouts.

But can we really afford another "social media moment," where we recognize the risks only after the damage is done?

I'd argue that the kind of common-sense oversight this proposal suggests isn't about stifling progress – it's about making sure progress doesn't come back to bite us.

So what's next? If this bill gains traction, we could see the Department of Energy become an unexpected AI safety watchdog.

And if it stalls, well, Silicon Valley gets a longer leash. Either way, one thing is clear: AI has officially moved from the tech blogs to the Senate floor, and it's not going back.
