
‘It could go quite wrong’: ChatGPT inventor Sam Altman admits A.I. could cause ‘significant harm to the world’ as he testifies in front of Congress


OpenAI CEO Sam Altman spoke in front of Congress about the dangers of AI after his company’s ChatGPT exploded in popularity in just a few months


Sam Altman urged Congress Tuesday to establish regulations for artificial intelligence, admitting that the technology ‘could go quite wrong.’

Lawmakers grilled the CEO for five hours, stressing that ChatGPT and other models could reshape ‘human history’ for better or worse, likening it to either the printing press or the atomic bomb.

Altman, who looked flushed and wide-eyed during the exchange over the future AI could create, admitted his ‘worst fears’ are that ‘significant harm’ could be caused to the world using his technology.

‘If this technology goes wrong, it could go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening,’ he continued.

Tuesday’s hearing is the first of a series intended to write rules for AI, which lawmakers said should have been done years ago.

Senator Richard Blumenthal, who presided over the hearing, said Congress failed to seize the moment with the birth of social media, allowing predators to harm children – but that moment has not passed with AI. 

San Francisco-based OpenAI rocketed to public attention after it released ChatGPT late last year. 

ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.

Senator Josh Hawley said: ‘A year ago we couldn’t have had this discussion because this tech had not yet burst to public. 

‘But [this hearing] shows just how rapidly [AI] is changing and transforming our world.’

Tuesday’s hearing aimed not to control AI, but to start a discussion on how to make ChatGPT and other models transparent, ensure risks are disclosed and establish scorecards.

Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with recorded remarks that sounded like the senator but were actually a voice clone, trained on Blumenthal’s floor speeches, reciting an opening statement written by ChatGPT after he asked the chatbot how he would open the hearing.

The result was impressive, said Blumenthal, but he added: ‘What if I had asked it, and what if it had provided an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin´s leadership?’

Blumenthal said AI companies should be required to test their systems and disclose known risks before releasing them.

San Francisco-based OpenAI rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses
Altman told senators that generative AI could be a ‘printing press moment,’ but he is not blind to its faults, noting policymakers and industry leaders need to work together to ‘make it so’

One issue raised during the hearing has long worried the public: how AI will impact jobs.

‘The biggest nightmare is the looming industrial revolution of the displacement of workers,’ Blumenthal said in his opening statement.

Altman addressed this concern later in the hearing. ‘I think it will entirely automate away some jobs, and it will create new ones that we believe will be much better,’ he said.

‘I believe that there will be far greater jobs on the other side of this, and the jobs of today will get better.’

Five takeaways from OpenAI CEO’s testimony in front of Congress

OpenAI CEO wants AI regulation

The purpose of the hearing was to discuss the dangers of AI and work with OpenAI on how to prevent harm to humanity.

‘GPT-4 is more likely to respond helpfully and truthfully and refuse harmful requests than any other widely deployed model of similar capability,’ Altman said. 

‘However, we think regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models.’

He also said: ‘As this technology advances, we understand that people are anxious about how it could change the way we live. We are too.’ 

Altman called for ‘a new agency that licenses any effort above a certain scale of capabilities and could take that license away and ensure compliance with safety standards.’ 

AI’s Influence on the 2024 Election 

Lawmakers voiced concerns about the technology swaying people’s opinions and spreading false information. 

Altman said: ‘It’s one of my areas of greatest concerns — the more general capability of these models to manipulate, to persuade, to provide sort of one-on-one disinformation.’

Some jobs will transition away

‘GPT-4 will, I think, entirely automate away some jobs. And it will create new ones that we believe will be much better,’ Altman said.

He told Congress to look at GPT-4 ‘as a tool, not a creature’ that is ‘good at doing tasks, not jobs.’

Christina Montgomery, IBM’s chief privacy officer, took a more measured view: ‘Some jobs will transition away.’

AI could be a printing press moment or an atomic bomb

Altman said: ‘We think it can be a printing press moment.’

AI is not social media

Lawmakers admitted they failed to regulate social media, allowing it to become a playground for predators and misinformation. 

‘This is not social media,’ Altman said. ‘This is different.’ 

Altman explained that ChatGPT-like systems are fundamentally different from social media platforms and will need their own rules.

Montgomery agreed, urging lawmakers not to slam the brakes on AI development.

‘The era of AI cannot be another era of “move fast and break things,”’ she said.

‘But we don’t have to slam the brakes on innovations, either.’

‘There will be an impact on jobs. We try to be very clear about that,’ Altman said.

Also testifying was Christina Montgomery, IBM’s chief privacy officer, who admitted AI will change everyday jobs but will also create new ones.

‘I am a personal example of a job that didn’t exist [before AI],’ she said.

The 2024 election was also brought up during the hearing by Hawley, who fears AI can sway people’s opinions, and Senator Amy Klobuchar voiced concerns about it spreading misinformation.

Altman agreed with their apprehensions about the future presidential election.

‘It’s one of my areas of greatest concerns — the more general capability of these models to manipulate, to persuade, to provide sort of one-on-one disinformation,’ said Altman.

The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work, while many see it as a virtual assistant.

In its simplest form, AI is a field that combines computer science and robust datasets to enable problem-solving.

The technology allows machines to learn from experience, adjust to new inputs and perform human-like tasks.

The systems, which include the sub-fields of machine learning and deep learning, consist of AI algorithms that seek to create expert systems making predictions or classifications based on input data.

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. 

Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problem.

In 1970, MIT computer scientist Marvin Minsky told Life Magazine, ‘From three to eight years, we will have a machine with the general intelligence of an average human being.’

And while the timing of the prediction was off, the idea of AI reaching human-level intelligence no longer seems far-fetched.

ChatGPT is evidence of how fast the technology is growing.

Musk, Wozniak and other tech leaders are among the 1,120 people who have signed the open letter calling for an industry-wide pause on the current ‘dangerous race’ 

In just a few months, it has passed the bar exam with a higher score than 90 percent of humans who have taken it, and it achieved 60 percent accuracy on the US Medical Licensing Exam.

Tuesday’s hearing is an attempt to make up for lawmakers’ failures with social media – getting control of AI before it grows too large to contain.

Senators made it clear that they do not want industry leaders to pause development – something Elon Musk and other tech tycoons have been lobbying for – but to continue their work responsibly.

Musk and more than 1,000 leading experts signed an open letter published by the Future of Life Institute, calling for a pause on the ‘dangerous race’ to develop ChatGPT-like AI.

Kevin Baragona, whose name appears on the letter, told DailyMail in March that ‘AI superintelligence is like the nuclear weapons of software.’

‘Many people have debated whether we should or shouldn’t continue to develop them,’ he continued.

Americans were wrestling with a similar idea while developing the weapon of mass destruction – at the time, it was dubbed ‘nuclear anxiety.’

‘It’s almost akin to a war between chimps and humans,’ Baragona told DailyMail.

‘The humans obviously win since we’re far smarter and can leverage more advanced technology to defeat them.

‘If we’re like the chimps, then the AI will destroy us, or we’ll become enslaved to it.’

TECH LEADERS’ PLEA TO STOP DANGEROUS AI: READ LETTER IN FULL

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.

As stated in the widely-endorsed Asilomar AI Principles, ‘Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.’ Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that ‘At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.’ We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.

We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

Source: DailyMail, Reuters (image), Fox News
