General Discussion
Is the AI Conveyor Belt of Capital About to Stop?
https://gizmodo.com/is-the-ai-conveyor-belt-of-capital-about-to-stop-2000671017

-snip-
Round-tripping, generally speaking, refers to the unethical and typically illegal practice of making trades or transactions to artificially prop up a particular asset or company, making it look like it's more valuable and in demand than it actually is. In this case, it would be tech companies that are trying to make it appear like they are more valuable than they actually are by announcing big deals with each other that move the stock price.
-snip-
Whether that is happening currently or not is kind of in the eye of the beholder. OpenAI has certainly shown advancements in its technology. The release of its Sora 2 video generation model has unleashed a fresh hell upon the world, used to generate significant amounts of copyright violations and misinformation. But the latest version of the company's flagship model, GPT-5, underwhelmed and failed to live up to expectations when it was released in August.
Adoption rates of the technology are also a bit of a Rorschach test. The company boasts that 10% of the world is using ChatGPT, and nearly 80% of the business world says that it's looking into how to utilize the technology. But the early adopters aren't finding much utility. According to a survey from the Massachusetts Institute of Technology, 95% of companies that have tried to integrate generative AI tools into their operations have produced zero return on investment.
-snip-
Much more at the link. Can you say hype-created bubble? I knew you could.
But a lot of AI proponents are easily satisfied and distracted by being allowed to play with AI tools to create AI slop. Talk about a cheap magic trick. Abracadabra, you're smart! Abracadabra, you're creative! At least if you don't check for the errors, or think about the worldwide theft of intellectual property to train the AI. Which would bring the entire generative AI house of cards crashing down, if these companies had to pay for even a fraction of what they stole.
Abracadabra, your business is more successful! hasn't worked out quite as well. Pesky things, those facts that the LLMs behind genAI really have no true grasp of, no matter how much stolen intellectual property is added to them as the AI companies scrape the internet to death.
And we really haven't seen yet just how much of an even more addled mess the Trump regime will make of our government's approach to generative AI, or how long it will take to fix that mess.
We have Sam Altman of OpenAI and the truly stupid adoption of ChatGPT (give it free to students who'll cheat with it but never notice the errors) to blame for much of the hype. But other AI bros and companies were quick to follow, aided by gullible users. Who may soon see their AI toys vanish or be priced out of their reach. They were bait, after all. Free or cheap bait to create addicts. Even lying Sam Altman admitted OpenAI was still losing money on subscriptions priced at hundreds of dollars.
But people not caught up in the insane charade don't deserve the harm that will result when the bubble bursts, any more than they deserved the harm already done by AI companies and AI users.
Klarkashton
(4,464 posts)Instruction manuals and waste untold amounts of natural resources for nothing.
They would rather piss money away on this shit than actually improve the quality of life on earth.
ProfessorGAC
(74,966 posts)...the rewrite barely exists.
It's an automated copy & paste from Wikipedia.
I've seen it dozens of times.
Klarkashton
(4,464 posts)One of the many basic flaws is that it's 'learning' from an ever increasing archive of defective 'knowledge'.
highplainsdem
(58,786 posts)Bernardo de La Paz
(59,845 posts)Company N buys a stake in Company A which uses the money to buy chips from Company N.
(I don't see it in the excerpt, though it does obliquely explain it.)
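To put rough numbers on that round trip, here's a minimal sketch in Python. The company names ("ChipCo" and "ModelCo") and the $1 billion figure are made up purely for illustration; this is a sketch of the pattern, not a model of any actual deal.

# Hypothetical round-trip sketch: "ChipCo" stands in for the chip vendor,
# "ModelCo" for the AI lab. Names and the $1B figure are invented for illustration.
investment = 1_000_000_000            # ChipCo buys a $1B stake in ModelCo
chipco_revenue_booked = investment    # ModelCo spends that same $1B on ChipCo chips

# ChipCo's cash ends up roughly where it started, but it now reports $1B of new
# revenue, and it holds a stake in a "customer" whose demand that same cash created.
net_cash_out_the_door = investment - chipco_revenue_booked   # = 0
print(f"ChipCo revenue booked:        ${chipco_revenue_booked:,}")
print(f"ChipCo net cash out the door: ${net_cash_out_the_door:,}")

The toy numbers are just meant to show that the same dollar can appear as an "investment" on one side of the loop and as "demand" on the other without any new money entering from outside it.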
Hugin
(37,019 posts)The techbros dream far preceded AI of making crypto worth what they saw on their fantasy balance sheets.
For example, Smucks claim that his xAI is worth far more than Tesla at $8 Billion. Ask yourself who thinks xAI is worth $8 Billion and the answer is exactly nobody.
highplainsdem
(58,786 posts)https://www.tomshardware.com/pc-components/gpus/nvidia-backs-20-billion-xai-chip-deal
The arrangement effectively finances purchases of Nvidia's own hardware, while ensuring that xAI secures priority access to GPUs during a period of tightening supply. It also gives Nvidia a foothold in one of the most aggressive AI training deployments in the United States. The chips are earmarked for Colossus 2, xAI's 100 MW Memphis site, which came online earlier this year. Musk has plans to double the site's GPU count to 200,000.
In September, Musk publicly denied rumors that claimed xAI was raising $10 billion at a $200 billion valuation, taking to X to say, "Fake news. xAI is not raising any capital right now." The deal now described by Bloomberg is larger, more complex, and more tightly bound to Nvidia. Reuters confirmed that the chips would be used at xAI's Memphis site, while others believe this to be a continuation of the AI industry's circular financing.
-snip-
Ha! I knew it was fricking outrageous. But, I am old skool. $10 is still a lot of money to me.
Thanks!
Prairie_Seagull
(4,486 posts)I saw a piece on BBC this morning about the need of big business to have a back up for being hacked, the one mentioned was pen and paper. Maybe we need a back up for AI, as a minimum for cheaters in schools IMO pen and paper would work for this as well.
Irony is of course alive and well.
highplainsdem
(58,786 posts)computers that many have no contingency plans if they're hacked.
And yes, schools should use pen and paper for tests. Though they might have to make sure the kids aren't wearing glasses with AI built in.