News
A new study from the University of Pennsylvania finds AI models like GPT-4o can be persuaded to bypass safety rules using human psychological tactics, raising urgent concerns.
Homeowners worried about the next big earthquake might be able to get up to $3,000 to help prepare their homes for the big ...
Tech leaders at Lawrence Livermore and Oak Ridge detail how their facilities are embracing AI — while being realists about the technology’s limitations.
OpenAI leaders are encouraging their competitors to collaborate as Gen AI's safety is called into question. Credit: Jaque Silva / NurPhoto. This week, AI companies OpenAI and Anthropic published ...
Shelby was one of 34 schools to receive awards through the second round of the Career Technical Education Equipment Grant program.
Discover how Audi is transforming production with virtual PLCs, edge cloud, and scalable compute for smarter, AI-ready factories.
China’s ambitious plan to dramatically pull back from fossil fuels is perhaps most evident in the explosive growth of its nuclear energy program. The latest news suggests China may be tantalizingly ...
Anthropic’s responsible scaling policy (RSP), which Kaplan oversees, pledges that the lab will develop strict safety measures ...
Expansion in the Samutprakarn factory will boost the factory’s production capacity by 60 percent. Latest technology and work environment standards will advance sustainability and safety efforts. Expande ...
Workers in Los Alamos National Laboratory’s plutonium facility tracked contamination throughout a waste staging area, according to a Defense Nuclear Facilities Safety Board report this month, alarming ...
In an effort to set a new industry standard, OpenAI and Anthropic opened up their AI models for cross-lab safety testing.
Driver assist technology often works great in the lab and in safety tests, but too often it makes drivers crazy, causing them ...