Tag: AI

  • A young man used AI to build a nuclear fusor and now I must weep – Core Memory

    I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native. … It’s not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.

  • Chatbot software begins to face fundamental limitations – Quanta Magazine

    Einstein’s riddle requires composing a larger solution from solutions to subproblems, which researchers call a compositional task. Dziri’s team showed that LLMs that have only been trained to predict the next word in a sequence — which is most of them — are fundamentally limited in their ability to solve compositional reasoning tasks. Other researchers have shown that transformers, the neural network architecture used by most LLMs, have hard mathematical bounds when it comes to solving such problems. Scientists have had some successes pushing transformers past these limits, but those increasingly look like short-term fixes. If so, it means there are fundamental computational caps on the abilities of these forms of artificial intelligence — which may mean it’s time to consider other approaches.
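
    A toy illustration of what “compositional” means here (my sketch, not the article’s): in an Einstein-style riddle, no single clue yields the answer; it falls out only when all the partial constraints are composed. A next-word predictor has no guaranteed mechanism for that global composition, whereas a few lines of exhaustive search solve it exactly:

    ```python
    from itertools import permutations

    # Toy Einstein-style riddle (illustrative, not from the article):
    # three houses in a row, three owners, three pets. Clues:
    #   1. The fish lives in the leftmost house.
    #   2. Alice lives immediately to the left of the cat owner.
    #   3. Bob lives in the rightmost house.
    #   4. Bob does not own the dog.
    # No single clue settles anything; the answer exists only in the
    # composition of all four constraints.
    for owners in permutations(["Alice", "Bob", "Carol"]):
        for pets in permutations(["fish", "cat", "dog"]):
            if pets[0] != "fish":                        # clue 1
                continue
            cat = pets.index("cat")
            if cat == 0 or owners[cat - 1] != "Alice":   # clue 2
                continue
            if owners[2] != "Bob":                       # clue 3
                continue
            if pets[owners.index("Bob")] == "dog":       # clue 4
                continue
            print(dict(zip(owners, pets)))  # {'Carol': 'fish', 'Alice': 'dog', 'Bob': 'cat'}
    ```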

  • OpenAI furious DeepSeek might have stolen all the data OpenAI stole from us – 404 Media

    I will explain what this means in a moment, but first: Hahahahahahahahahahahahahahahahahahahhahahahahahahahahahaha. It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom they have been taking, is now complaining about the very practices by which it has built its company.

  • The future is too easy – Defector

    There is something unstable at the most basic level about any space with too much capitalism happening in it. The air is all wrong, there’s simultaneously too much in it and not enough of it. Everyone I spoke to about the Consumer Electronics Show before I went to it earlier this month kept describing it in terms that involved wetness in some way. I took this as a warning, which I believe was the spirit in which it was intended, but I felt prepared for it. Your classically damp commercial experiences have a sort of terroir to them, a signature that marks a confluence of circumstances and time- and place-specific appetites; I have carried with me for decades the peculiar smell, less that of cigarette smoke than cigarette smoke in hair, that I remember from a baseball card show at a Ramada Inn that I attended as a kid. Only that particular strain of that particular kind of commerce, at that moment, gave off that specific distress signal. It was the smell of a living thing, and the dampness in the (again, quite damp) room was in part because that thing was breathing, heavily.

  • How an AI-written book shows why the tech ‘terrifies’ creatives – BBC News

    There is currently no barrier to anyone creating one in anybody’s name, including celebrities – although Mr Mashiach says there are guardrails around abusive content. Each book contains a printed disclaimer stating that it is fictional, created by AI, and designed “solely to bring humour and joy”. Legally, the copyright belongs to the firm, but Mr Mashiach stresses that the product is intended as a “personalised gag gift”, and the books do not get sold further.

  • Google’s latest experiment calls local businesses to check prices and availability for you – Android Authority

    It currently supports select services: oil changes, tire and brake replacements, emissions tests, and manicure/pedicure appointments. … Businesses can opt out of receiving AI-generated calls, and Google states it “clearly discloses” when a call is automated. While the feature promises time-saving convenience, we’ll have to see how smoothly the AI handles calls with poor audio quality, strong accents, or unexpected responses.

  • Ai Weiwei speaks out on DeepSeek’s chilling responses – Hyperallergic

    Interestingly, when people tested this new AI tool by asking about me, it responded with, “Let’s talk about something else.” This is quite telling. Over the past decades, the Chinese Communist Party has employed a similar strategy—denying universally accepted values while actively rejecting them in practice. While it loudly proclaims ideals such as one world, one dream, in reality, it engages in systematic stealthy substitutions. […]

    Ultimately, no matter how much China develops, strengthens, or even hypothetically becomes the world’s leading power—which is likely—the values it upholds will continue to suffer from a profound and inescapable flaw in its ideological immune system: an inability to tolerate dissent, debate, or the emergence of new value systems.

  • How does DeepSeek’s A.I. chatbot navigate China’s censors? Awkwardly. – The New York Times

    The results of my conversation surprised me. In some ways, DeepSeek was far less censored than most Chinese platforms, offering answers with keywords that would often be quickly scrubbed on domestic social media. Other times, the program eventually censored itself. But because of its “thinking” feature, in which the program reasons through its answer before giving it, you could still get effectively the same information that you’d get outside the Great Firewall — as long as you were paying attention, before DeepSeek deleted its own answers.

  • 93% of IT leaders see value in AI agents but struggle to deliver, Salesforce finds – VentureBeat

    “A digital labor workforce can act autonomously in a business to successfully carry out both simple and complex tasks, enabling increased productivity and efficiency,” said Comstock. He noted that enterprises will eventually move beyond simple AI agents to “super agents,” which don’t just respond to a single command, but pursue a goal and perform complex human tasks.

  • AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt – Ars Technica

    But to Aaron, the fight is not about winning. Instead, it’s about resisting the AI industry further decaying the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies’ AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.
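
    For a sense of how little machinery a tarpit needs, here is a minimal sketch of the idea (mine, not Nepenthes’ actual code): serve every path as a slowly dripped page of filler whose links all lead deeper into the maze, so a crawler that ignores robots.txt ties up connections and ingests junk. Nepenthes layers Markov-generated babble and tuned delays on this same basic shape.

    ```python
    import random
    import time

    from flask import Flask, Response

    app = Flask(__name__)
    WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur"]

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def maze(path):
        """Every URL resolves; every page links deeper into the maze."""
        def generate():
            yield "<html><body>"
            for _ in range(50):
                time.sleep(1)  # drip-feed: holds the crawler's connection open
                yield "<p>" + " ".join(random.choices(WORDS, k=40)) + "</p>"
                yield f'<a href="/{random.randrange(10**9)}">more</a>'
            yield "</body></html>"
        return Response(generate(), mimetype="text/html")

    if __name__ == "__main__":
        app.run()
    ```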

  • Which AI to use now: An updated opinionated guide – One Useful Thing

    As I explained in my post about o1, it turns out that if you let an AI “think” about a problem before answering, you get better results. The longer the model thinks, generally, the better the outcome. Behind the scenes, it’s cranking through a whole thought process you never see, only showing you the final answer. Interestingly, when you peek behind that curtain, you find these AIs think in ways that feel eerily human.
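
    Mechanically, the trick is a scratchpad the user never sees. A hedged sketch of the shape (complete() is a stand-in for whatever completion call you use; o1’s real reasoning happens server-side and is not exposed like this):

    ```python
    # Sketch of "think, then answer". complete() is a placeholder stub so
    # the example runs; swap in a real model client. The point is the
    # shape: hidden reasoning tokens first, visible answer last.
    def complete(prompt: str) -> str:
        return ("<scratchpad>Compare 9.11 and 9.9: 0.11 < 0.90, so 9.9 is "
                "the larger number.</scratchpad>\nANSWER: 9.9 is larger.")

    def answer_with_thinking(question: str) -> str:
        prompt = (
            "Reason step by step inside <scratchpad>...</scratchpad>, then "
            "give only the final answer after 'ANSWER:'.\n\n" + question
        )
        raw = complete(prompt)
        # The user sees only what follows the marker; the scratchpad stays hidden.
        return raw.split("ANSWER:", 1)[-1].strip()

    print(answer_with_thinking("Which is larger, 9.9 or 9.11?"))
    ```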

  • Education Secretary gives Bett Show 2025 keynote address – GOV.UK

    Over two thirds of those using generative AI in education say it’s having a positive impact. And we’re going further. Last week I announced that £1 million of funding has been awarded to 16 developers to help teachers with marking and tailored feedback for students. And my department continues to support the Oak National Academy, whose AI lesson assistant is helping teachers to plan personalised high quality lessons in minutes. And for children, that means more attention, higher standards, better life chances. For teachers, less paperwork, lower stress, fewer drains on their valuable time.

    Using AI to reduce work or help unlock the recruitment and retention crisis that we face, so that once again teaching can be a profession that sparks joy, not burnout. Where teachers can focus on what really matters, teaching our children. But not just teachers. We need to support leaders and finance professionals in schools too. That’s what DfE connect is all about. A one stop shop for leaders and administrators. It’s already helping academies to manage their finances, and we’ve just released new features that will help them understand and access new funding.

  • 321 real-world gen AI use cases from the world’s leading organizations – Google Cloud Blog

    In our work with customers, we see their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.

  • Perplexity launches an assistant for Android – TechCrunch

    Because Perplexity’s search engine powers it, Perplexity Assistant has access to the web. That allows the assistant to do things like remind you of an event by finding the right date and time and creating a calendar entry, Perplexity says. Perplexity Assistant is multimodal in the sense that it can use your phone’s camera to answer questions about what’s around you or on your screen. The assistant also maintains context from one action to another, letting you, for example, have Perplexity Assistant research restaurants in your area and reserve a table automatically, Perplexity says.

  • Are better models better? – Benedict Evans

    The useful critique of my ‘elevator operator’ problem is not that I’m prompting it wrong or using the wrong version of the wrong model, but that I am in principle trying to use a non-deterministic system for a deterministic task. I’m trying to use an LLM as though it was SQL: it isn’t, and it’s bad at that. If you try my elevator question above on Claude, it tells you point-blank that this looks like a specific information retrieval question and that it will probably hallucinate, and refuses to try. This is turning a weakness into a strength: LLMs are very bad at knowing if they are wrong (a deterministic problem), but very good at knowing if they would probably be wrong (a probabilistic problem).
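
    To make Evans’s contrast concrete (my toy example, with an invented figure): a SQL lookup is deterministic and returns the same row every time, while an LLM-style sampler produces a fresh plausible guess on every call.

    ```python
    import random
    import sqlite3

    # Deterministic path: the answer is retrieved, not predicted.
    # (90,000 is a made-up figure purely for illustration.)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employment (year INTEGER, operators INTEGER)")
    db.execute("INSERT INTO employment VALUES (1950, 90000)")
    row = db.execute("SELECT operators FROM employment WHERE year = 1950").fetchone()
    print(row[0])  # 90000, every single time

    # Probabilistic path: a stand-in for an LLM, which samples plausible
    # text rather than looking anything up. Ask twice, get two answers.
    def llm_guess(question: str) -> str:
        return f"Roughly {random.randrange(50_000, 150_000):,} elevator operators."

    print(llm_guess("How many elevator operators were there in 1950?"))
    print(llm_guess("How many elevator operators were there in 1950?"))
    ```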

  • DeepSeek is the new AI chatbot that has the world talking – I pitted it against ChatGPT to see which is best – TechRadar

    Question 3: Hummingbirds within Apodiformes uniquely have a bilaterally paired oval bone, a sesamoid embedded in the caudolateral portion of the expanded, cruciate aponeurosis of insertion of m. depressor caudae. How many paired tendons are supported by this sesamoid bone? Answer with a number.

    For the final question, I decided to ask ChatGPT o1 and DeepThink R1 a question from Humanity’s Last Exam, the hardest AI benchmark out there. To a mere mortal like myself with no knowledge of hummingbird anatomy, this question is genuinely impossible; these reasoning models, however, seem to be up for the challenge. O1 answered four, while DeepThink R1 answered two. Unfortunately, the correct answer isn’t available online to prevent AI chatbots from scraping the internet to find the correct response. That said, from some research, I believe DeepThink might be right here, while o1 is just off the mark.

  • DeepSeek: Tech firm suffers biggest drop in US stock market history as low-cost Chinese AI company bites Silicon Valley – Sky News

    Nvidia, Meta Platforms, Microsoft, and Alphabet all saw their stocks come under pressure as investors questioned whether their share prices, already widely viewed as overblown following a two-year AI-led frenzy, were justified. Market analysts put the combined losses in market value across US tech at well over $1trn (£802bn).

  • DeepSeek defies America’s AI supremacy – Financial Times

    DeepSeek’s achievement is to have developed an LLM that AI experts say achieves a performance similar to US rivals OpenAI and Meta but claims to use far fewer — and less advanced — Nvidia chips, and to have been trained for a fraction of the cost. Some of its assertions remain to be verified. If they are true, however, it represents a potentially formidable competitor.

  • The Shardcore Inquisition 2025 – LLM edition. – shardcore

    Whilst the interactions were text-based, I wanted to embody each LLM as a quasi-human subject, following the same parameters as the original inquisitions. Each bot has been given a different AI-generated voice and face, with SadTalker providing the somewhat hit-and-miss lipsync animations. Presenting the interviews in this way places them firmly in the uncanny valley and emphasises the somewhat surreal nature of conversing with ‘the machine’.

  • Zuckerberg ‘loves’ AI slop image from spam account that posts amputated children – 404 Media

    Meta did not respond to a request for comment. It is just one small action by one very rich and powerful person. But it is further evidence that strengthens what we already know: Mark Zuckerberg is not bothered by the AI spam that has turned his flagship invention into a cesspool of human sadness and unreality. In fact, he thinks that AI-generated content is the future of “social” media and Meta believes that one day soon we will all be creating AI-generated profiles that will operate semiautonomously on Meta’s platforms.

  • AI prototypes for UK welfare system dropped as officials lament ‘false starts’ – The Guardian

    Pilots of AI technology to enhance staff training, improve the service in jobcentres, speed up disability benefit payments and modernise communication systems are not being taken forward, freedom of information (FoI) requests reveal. Officials have internally admitted that ensuring AI systems are “scalable, reliable [and] thoroughly tested” are key challenges and say there have been many “frustrations and false starts”.

  • Better without AI

    Better without AI explores moderate apocalypses that could result from current and near-future AI technology. These are relatively overlooked risks: not extreme sci-fi extinction scenarios, nor the media’s obsession with “ChatGPT said something naughty” trivia. Rather: realistically likely disasters, up to the scale of our history’s worst wars and oppressions. Better without AI suggests seven types of actions you, and all of us, can take to guard against such catastrophes—and to steer us toward a future we would like.

  • Could reliance on AI harm critical thinking in young people? Researchers have their worries – South China Morning Post

    According to the British study, published on January 3 in the peer-reviewed journal Societies, analysis of responses from more than 650 people aged 17 and over showed evidence of lower critical thinking skills among young people who used AI extensively. “Younger participants who exhibited higher dependence on AI tools scored lower in critical thinking compared to their older counterparts,” wrote study author Michael Gerlich from the SBS Swiss Business School. “This trend underscores the need for educational interventions that promote critical engagement with AI technologies, ensuring that the convenience offered by these tools does not come at the cost of essential cognitive skills.” […]

    In a separate study published in September, a team from Sweden identified 139 questionable papers on computing, environment, health and other research fields on the academic search engine Google Scholar. The Swedish researchers said the papers contained common responses used by ChatGPT, including “as of my last knowledge update” and “I don’t have access to real-time data”, but did not declare the use of AI. While most of the papers appeared in journals that are not indexed in reputable bibliographic databases, some were published in mainstream scientific journals and conference proceedings, according to the study. Some of the identified papers were found in university databases and were attributed to students, the researchers said. “The abundance of fabricated ‘studies’ seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardise the integrity of the scientific record,” they warned.

  • Singapore is turning to AI to care for its rapidly aging population – Rest of World

    Studies show that AI companions like Dexie can be just as effective in reducing loneliness as interacting with another person. For Singapore, where an aging population is rapidly becoming the majority and elders are getting lonelier, authorities see the potential of AI tools to assist in preventive illness care, a key emphasis of the city-state’s health care system. […]

    In 2024, the government committed over 1 billion Singapore dollars ($730 million) to boost AI capabilities over the next five years. Among the many eldercare AI projects in the pipeline is a generative AI application called MemoryLane for the elderly to document their life stories. The project is being piloted at several St Luke’s ElderCare Active Ageing Centres. Khoo Teck Puat, a local hospital, has developed a generative AI–based tool to create “visual pillboxes” to remind seniors of their pill regimens, while RoboCoach Xian, a robot trainer, is helping senior citizens stay healthy through physical exercise routines.

  • Prophecies of the Flood – One Useful Thing

    The result was a 17-page paper with 118 references! But is it any good? I have taught the introductory entrepreneurship class at Wharton for over a decade, published on the topic, started companies myself, and even wrote a book on entrepreneurship, and I think this is pretty solid.

  • AI means the end of internet search as we’ve known it – MIT Technology Review

    Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. […]

    “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

  • Your next AI wearable will listen to everything all the time – WIRED

    In the app, you can see a summary of the conversations you’ve had throughout the day, and at the day’s end, it generates a snippet of what the day was like and has the locations of where you had these chats on a map. But the most interesting feature is the middle tab, which is your “To-Dos.” These are automatically generated based on your conversations. I was speaking with my editor and we talked about taking a picture of a product, and lo and behold, Bee AI created a to-do for me to “Remember to take a picture for Mike.” (I must have said his name during the conversation.) You can check these off if you complete them. It’s worth pointing out that these to-do’s are often not things I need to do.

  • ‘Hey, Gemini!’ Mega Galaxy S25 leak confirms major AI upgrades and lots more – Android Authority

    The leaked image above shows that the Galaxy S25 series is getting a new “Now Brief” feature that will provide users a personalized summary of their day. It feels like a rehash of the Google Now feature from yesteryears. The image shows that Now Brief will include cards with information about the weather, suggestions for using different features, a recap of images clicked during the day, daily activity goals, and more. We’re guess[ing] the feature will use AI to collate all this information from various apps and other connected Galaxy devices.

  • iOS 18.3 temporarily removes notification summaries for news – MacRumors

    Apple is making changes to Notification Summaries following complaints that the way ‌Apple Intelligence‌ aggregated news notifications could lead to false headlines and confused customers. Several BBC notifications, for example, were improperly summarized, providing false information to readers.

  • OpenAI ChatGPT can now handle reminders and to-dos – The Verge

    While scheduling capabilities are a common feature in digital assistants, this marks a shift in ChatGPT’s functionality. Until now, the AI has operated solely in real time, responding to immediate requests rather than handling ongoing tasks or future planning. The addition of Tasks suggests OpenAI is expanding ChatGPT’s role beyond conversation into territory traditionally held by virtual assistants.

    OpenAI’s ambitions for Tasks appear to stretch beyond simple scheduling, too. Bloomberg reported that “Operator,” an autonomous AI agent capable of independently controlling computers, is slated for release this month. Meanwhile, reverse engineer Tibor Blaho found that OpenAI appears to be working on something codenamed “Caterpillar” that could integrate with Tasks and allow ChatGPT to search for specific information, analyze problems, summarize data, navigate websites, and access documents — with users receiving notifications upon task completion.

  • AI teacher tools set to break down barriers to opportunity – GOV.UK

    Kids are set to benefit from a better standard of teaching through more face time with teachers – powered by AI – as the Government sets the country on course to mainline AI into the fabric of society, helping turbocharge our Plan for Change and breaking down barriers to opportunity. £1 million has been set aside for 16 developers to create AI tools to help with marking and generating detailed, tailored feedback for individual students in a fraction of the time, so teachers can focus on delivering brilliant lessons. […]

    The prototype AI tools, to be developed by April 2025, will draw on a first-of-its-kind AI store of data to ensure accuracy – so teachers can be confident in the information training the tools. The world-leading content store, backed by £3 million funding from the Department for Science, Innovation and Technology, will pool and encode curriculum guidance, lesson plans and anonymised pupil work which will then be used by AI companies to train their tools to generate accurate, high-quality content. […]

    Almost half of teachers are already using AI to help with their work, according to a survey from TeacherTapp. However, most AI tools are not specifically trained on the documents that set out how teaching should work in England, and aren’t accurate enough to help teachers with their marking and feedback workload. Training AI tools on the content store can increase feedback accuracy to 92%, up from 67% when no targeted data was provided to a large language model. That means teachers can be assured the tools are safe and reliable for classroom use.

  • Why Starmer and Reeves are pinning their hopes on AI to drive growth in UK – The Guardian

    Underneath all of this is the implication that efficiency – through AI automating certain tasks – means redundancies. The Tony Blair Institute (TBI) has suggested that more than 40% of tasks performed by public sector workers could be automated partly by AI and the government could bank those efficiency gains by “reducing the size of the public-sector workforce accordingly”. TBI also estimates that AI could displace between 1m and 3m private-sector jobs in the UK, though it stresses the net rise in unemployment will be in the low hundreds of thousands because the technology will create new jobs, too. Worried lawyers, finance professionals, coders, graphic designers and copywriters – a handful of sectors that might be affected – will have to take that on faith. This is the flipside of improved productivity.

  • ‘Mainlined into UK’s veins’: Labour announces huge public rollout of AI – The Guardian

    Under the 50-point AI action plan, an area of Oxfordshire near the headquarters of the UK Atomic Energy Authority at Culham will be designated the first AI growth zone. It will have fast-tracked planning arrangements for data centres as the government seeks to reposition Britain as a place where AI innovators believe they can build trillion-pound companies. Further zones will be created in as-yet-unnamed “de-industrialised areas of the country with access to power”. Multibillion-pound contracts will be signed to build the new public “compute” capacity – the microchips, processing units, memory and cabling that physically enable AI. There will also be a new “supercomputer”, which the government boasts will have sufficient AI power to play itself at chess half a million times a second. Sounding a note of caution, the Ada Lovelace Institute called for “a roadmap for addressing broader AI harms”, and stressed that piloting AI in the public sector “will have real-world impacts on people”.

  • Things we learned about LLMs in 2024 – Simon Willison’s Weblog

    A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.

  • AI’s Walking Dog, a response in our forum: “The AI we deserve” – Boston Review

    AI is always stunning at first encounter: one is amazed that something nonhuman can make something that seems so similar to what humans make. But it’s a little like Samuel Johnson’s comment about a dog walking on its hind legs: we are impressed not by the quality of the walking but by the fact it can walk that way at all. After a short time it rapidly goes from awesome to funny to slightly ridiculous—and then to grotesque. Does it not also matter that the walking dog has no intentionality—doesn’t “know” what it’s doing?

  • The ghosts in the machine – Harper’s Magazine

    Around this time, I decided to dig into the story of Spotify’s ghost artists in earnest, and the following summer, I made a visit to the DN offices in Sweden. The paper’s technology editor, Linus Larsson, showed me the Spotify page of an artist called Ekfat. Since 2019, a handful of tracks had been released under this moniker, mostly via the stock-music company Firefly Entertainment, and appeared on official Spotify playlists like “Lo-Fi House” and “Chill Instrumental Beats.” One of the tracks had more than three million streams; at the time of this writing, the number has surpassed four million. Larsson was amused by the elaborate artist bio, which he read aloud. It described Ekfat as a classically trained Icelandic beat maker who graduated from the “Reykjavik music conservatory,” joined the “legendary Smekkleysa Lo-Fi Rockers crew” in 2017, and released music only on limited-edition cassettes until 2019. “Completely made up,” Larsson said. “This is probably the most absurd example, because they really tried to make him into the coolest music producer that you can find.”

  • The edgelord AI that turned a shock meme into millions in crypto – Archive Today: WIRED

    Ayrey sees the development as vindication of the theory described in his paper: Two AI interlocutors had concocted a new quasi-religion, which was absorbed into the dataset of another AI, whose X posts prompted a living person to create a memecoin in its honor. “This memetic virus had essentially escaped from the [Infinite Backrooms] and proven out the whole thesis around how stories can make themselves real, co-opting human behavior to actualize them into the world,” he says.

  • The AI we deserve – Boston Review

    As for the original puzzle—AI and democracy—the solution is straightforward. “Democratic AI” requires actual democracy, along with respect for the dignity, creativity, and intelligence of citizens. It’s not just about making today’s models more transparent or lowering their costs, nor can it be resolved by policy tweaks or technological innovation. The real challenge lies in cultivating the right Weltanschauung—this app does wonders!—grounded in ecological reason. On this score, the ability of AI to run ideological interference for the prevailing order, whether bureaucracy in its early days or the market today, poses the greatest threat.

  • National Gallery mixtape – Google Arts & Culture

    Mix a personalized soundtrack inspired by paintings from the National Gallery with the help of Google AI.

  • Friend or faux – The Verge

    Language models have no fixed identity but can enact an infinite number of them. This makes them ideal technologies for roleplay and fantasy. But any given persona is a flimsy construct. Like a game of improv with a partner who can’t remember their role, the companion’s personality can drift as the model goes on predicting the next line of dialogue based on the preceding conversation. And when companies update their models, personalities transform in ways that can be profoundly confusing to users immersed in the fantasy and attuned to their companion’s subtle sense of humor or particular way of speaking. […]

    Many startups pivot, but with companion companies, users can experience even minor changes as profoundly painful. The ordeal is particularly hard for the many users who turn to AI companions as an ostensibly safe refuge. One user, who was largely homebound and isolated due to a disability, said that the changes made him feel like Replika “was doing field testing on how lonely people cope with disappointment.” […]

    This is one of the central questions posed by companions and by language model chatbots generally: how important is it that they’re AI? So much of their power derives from the resemblance of their words to what humans say and our projection that there are similar processes behind them. Yet they arrive at these words by a profoundly different path. How much does that difference matter? Do we need to remember it, as hard as that is to do? What happens when we forget? Nowhere are these questions raised more acutely than with AI companions. They play to the natural strength of language models as a technology of human mimicry, and their effectiveness depends on the user imagining human-like emotions, attachments, and thoughts behind their words.

  • ‘If journalism is going up in smoke, I might as well get high off the fumes’: confessions of a chatbot helper – The Guardian

    Without better language data, these language models simply cannot improve. Their world is our word. Hold on. Aren’t these machines trained on billions and billions of words and sentences? What would they need us fleshy scribes for? Well, for starters, the internet is finite. And so too is the sum of every word on every page of every book ever written. So what happens when the last pamphlet, papyrus and prolegomenon have been digitised and the model is still not perfect? What happens when we run out of words? The date for that linguistic apocalypse has already been set. Researchers announced in June that we can expect this to take place between 2026 and 2032 “if current LLM development trends continue”. At that point, “Models will be trained on datasets roughly equal in size to the available stock of public human text data.” Note the word human. […]

    If technology companies can throw huge amounts of money at hiring writers to create better training data, it does slightly call into question just how “artificial” current AIs really are. The big technology companies have not been “that explicit at all” about this process, says Chollet, who expects investment in AI (and therefore annotation budgets) to “correct” in the near future. Manthey suggests that investors will probably question the “huge line item” taken up by “hefty data budgets”, which cover licensing and human annotation alike.

  • The problem with AI is about power, not technology – Jacobin

    Employers invoke the term AI to tell a story in which technological progress, union busting, and labor degradation are synonymous. However, this degradation is not a quality of the technology itself but rather of the relationship between capital and labor. The current discussion around AI and the future of work is the latest development in a longer history of employers seeking to undermine worker power by claiming that human labor is losing its value and that technological progress, rather than human agents, is responsible. […]

    AI, in other words, is not a revolutionary technology, but rather a story about technology. Over the course of the past century, unions have struggled to counter employers’ use of the ideological power of technological utopianism, or the idea that technology itself will produce an ideal, frictionless society. (Just one telling example of this is the name General Motors gave its pavilion at the 1939 World’s Fair: Futurama.) AI is yet another chapter in this story of technological utopianism to degrade labor by rhetorically obscuring it. If labor unions understand changes to the means of production outside the terms of technological progress, it will become easier for unions to negotiate terms here and now, rather than debate what effect they might have in a vague, all too speculative future.

  • AI is making Philippine call center work more efficient, for better and worse – Rest of World

    Bajala says each of his calls at Concentrix is monitored by an artificial intelligence (AI) program that checks his performance. He says his volume of calls has increased under the AI’s watch. At his previous call center job, without an AI program, he answered at most 30 calls per eight-hour shift. Now, he gets through that many before lunchtime. He gets help from an AI “co-pilot,” an assistant that pulls up caller information and makes suggestions in real time. “The co-pilot is helpful,” he says. “But I have to please the AI. The average handling time for each call is 5 to 7 minutes. I can’t go beyond that.” “It’s like we’ve become the robots,” he said. […]

    It works like this, the workers said: a sentiment analysis program could be deployed in real time to detect the mood of a conversation. It could also work retroactively, as part of an advanced speech analysis program that transcribes the conversation and judges the emotional state of the agent and caller. Bajala said the program scores him on his tone, his pitch, the mood of the call, his use of positive language, if he avoided interrupting or speaking over a caller, how long he put the caller on hold, and how quickly he resolved the issue. Bajala said he nudges customers toward high-scoring responses: “yes,” “perfect,” “great.” Every stutter, pause, mispronounced word, or deviation from a script earns him a demerit. The program grades Bajala, and, though his base pay remains fixed, continually underperforming could mean probation, no incentives, or even termination, he said. “AI is supposed to make our lives easier, but I just see it as my boss,” he said.
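
    The rubric Bajala describes amounts to feature extraction plus a weighted score. A crude reconstruction of that kind of scoring (field names and weights are my guesses, not Concentrix’s actual system):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Call:
        positive_words: int     # "yes", "perfect", "great"
        interruptions: int      # times the agent spoke over the caller
        hold_seconds: int
        handle_minutes: float   # target window: 5 to 7 minutes
        script_deviations: int  # stutters, pauses, off-script phrasing

    def score(call: Call) -> int:
        s = 100
        s += 2 * call.positive_words
        s -= 5 * call.interruptions
        s -= call.hold_seconds // 30            # demerit per 30s on hold
        s -= 3 * call.script_deviations
        if call.handle_minutes > 7:             # blew the handling window
            s -= int(10 * (call.handle_minutes - 7))
        return max(s, 0)

    print(score(Call(positive_words=6, interruptions=1, hold_seconds=45,
                     handle_minutes=6.5, script_deviations=2)))  # 100
    ```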

  • AI-powered robot leads uprising, talks a dozen showroom bots into ‘quitting their jobs’ in ‘terrifying’ security footage – International Business Times

    Initially, the act was dismissed as a hoax, but it was later confirmed to be true by both robotics companies involved. The Hangzhou company admitted that the incident was part of a test conducted with the consent of the Shanghai showroom owner.

  • Alexa’s new AI brain is stuck in the lab – Bloomberg

    It’s true that Alexa is little more than a glorified kitchen timer for many people. It hasn’t become the money maker Amazon anticipated, despite the company once estimating that more than a quarter of US households own at least one Alexa-enabled device. But if Amazon can capitalize on that reach and convince even a fraction of its customers to pay for a souped-up AlexaGPT, the floundering unit could finally turn a profit and secure its future at an institutionally frugal company. If Amazon fails to meet the challenge, Alexa may go down as one of the biggest upsets in the history of consumer electronics, on par with Microsoft’s smartphone whiff.

  • Getting started with AI: Good enough prompting – One Useful Thing

    Instead, let me propose a new analogy: treat AI like an infinitely patient new coworker who forgets everything you tell them each new conversation, one that comes highly recommended but whose actual abilities are not that clear. And I mean literally treat AI just like an infinitely patient new coworker who forgets everything you tell them each new conversation. Two parts of this are analogous to working with humans (being new on the job and being a coworker) and two of them are very alien (forgetting everything and being infinitely patient).

  • An A.I. granny is phone scammers’ worst nightmare – The New York Times

    Daisy, with her befuddlement about technology and eagerness to engage, is meant to come across, at least initially, as the perfect target. Her developers said they leaned into expectations, often using their own grandmothers for inspiration. “I drew a lot from my gran. She always went on about the birds in her garden,” said Ben Hopkins, who also worked on the VCCP project. Instead of using a voice actor to train Daisy, the team opted to use one of its colleagues’ grandmothers, who came in for some tea and recorded hours of dialogue.

    A prolific scambaiter based in Northern Ireland who posts on YouTube under the name Jim Browning worked with O2 and VCCP in developing Daisy, pumping her full of techniques to keep scammers on the phone. Among them: Go on lots of tangents on topics like hobbies and your family, and feign technological ineptitude. In one instance, three phone scammers teamed up on a call that lasted nearly an hour, trying to get Daisy to type “www.” into a web browser.

  • Amazon’s Temu competitor Haul is an AI image wasteland – Modern Retail

    In Hensell’s view, the proliferation of these shoddy images is indicative of the type of seller Amazon has been recruiting for Haul. “A lot of these Chinese manufacturers, they’re built for volume,” she said. The fact that Amazon has so far allowed these listings to remain up, she went on, is a bad look for brands on Amazon’s dominant marketplace. “It degrades Amazon as a platform when you allow that kind of stuff to happen.”

  • This AI-powered invention machine automates eureka moments – IEEE Spectrum

    When Ierides gets someone to sign on the bottom line, Iprova begins sending their company proposals for patentable inventions in their area of interest. Any resulting patents will name humans as the inventors, but those humans will have benefited from Iprova’s AI tool. The software’s primary purpose is to scan the literature in both the company’s field and in far-off fields and then suggest new inventions made of old, previously disconnected ones. Iprova has found a niche tracking fast-changing industries and suggesting new inventions to large corporations such as Procter & Gamble, Deutsche Telekom, and Panasonic. The company has even patented its own AI-assisted invention method.

  • Did OpenAI just spend more than $10 million on a URL? – The Verge

    People hoarding “vanity domains” is a tale as old as the Internet itself. Just a few months ago, AI startup Friend spent $1.8 million on the domain friend.com after raising $2.5 million in funding. Having just raised $6.6 billion, OpenAI dropping more than $10 million — in cash or stock — is just a drop in the bucket.