GoDuBois.com

So, what do you all think


fedup


About this new game in town?

Artificial intelligence
 
Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
 
 
Now your tax dollars are being used to pay for the government spy that never stops. Hackers are waking up with a gleam in their eyes. 
 
How long will it be before it starts running and controlling the already crooked voting process?

Tech giants are ill-prepared to combat "hallucinations" generated by artificial intelligence platforms, industry experts warned in comments to Fox News Digital, but corporations themselves say they're taking steps to ensure accuracy within the platforms. 

AI chatbots, such as ChatGPT and Google's Bard, can at times spew inaccurate information or nonsensical text, referred to as "hallucinations."

"The short answer is no, corporation and institutions are not ready for the changes coming or challenges ahead," said AI expert Stephen Wu, chair of the American Bar Association Artificial Intelligence and Robotics National Institute, and a shareholder with Silicon Valley Law Group. 

 

Often, hallucinations are honest mistakes made by technology that, despite promises, still possesses flaws.

Companies should have been upfront with consumers about these flaws, one expert said. 

"I think what the companies can do, and should have done from the outset … is to make clear to people that this is a problem," Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University in California, told Fox News Digital. 

"This shouldn’t have been something that users have to figure out on their own. They should be doing much more to educate the public about the implications of this."

Large language models, such as the one behind ChatGPT, take billions of dollars and years to train, Amazon CEO Andy Jassy told CNBC last week. 

In building Amazon's own foundation model Titan, the company was "really concerned" with accuracy and producing high-quality responses, Bratin Saha, an AWS vice president, told CNBC in an interview.

Microsoft's Bing Chat, which is based on the same large language model as ChatGPT, also links to sources where users can find more information about their queries, and allows users to "like" or "dislike" answers given by the bot.

"We have developed a safety system including content filtering, operational monitoring and abuse detection to provide a safe search experience for our users," a Microsoft spokesperson told Fox News Digital. 

https://www.foxnews.com/lifestyle/misinformation-machines-tech-titans-grappling-stop-chatbot-hallucinations
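For anyone wondering what "content filtering" actually looks like in practice, here is a bare-bones sketch in Python. This is not Microsoft's system; the blocklist, function name, and refusal message are all invented for illustration, and real filters rely on trained classifiers rather than keyword matching.

```python
# Purely illustrative sketch of a content-filtering pass on chatbot output.
# The blocklist, function name, and refusal message are invented for this
# example; production systems use trained classifiers, not keyword lists.

BLOCKED_TERMS = {"social security number", "credit card number"}

def filter_response(reply: str) -> str:
    """Return the bot's reply, or a refusal if it trips the filter."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."
    return reply

print(filter_response("Here is the weather forecast for DuBois, PA."))
print(filter_response("Sure, here is that credit card number you asked for."))
```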


So your tax dollars are paying for a spy system that has hallucinations.  Works for me, how about you????

 

Don't forget, people: this thing, just like climate models, is taught by humans. This thing regurgitates human opinions just like the climate models regurgitate human opinions.


  • 2 weeks later...

This just may be the gut-wrenching belly laugh of the century.

 

 


Kamala Harris to discuss A.I. in meeting with Google, Microsoft, OpenAI and Anthropic CEOs

