Tag: AI

  • A deadly love affair with a chatbot – Der Spiegel

    In hindsight, one can say that Sewell’s parents tried everything. They spoke with their son. They tried to find out what was bothering him. What he was doing on his phone all those hours in his room. Nothing, Sewell told them. He showed them his Instagram and TikTok accounts. They found hardly any posts from him; he only watched a few videos now and then. They looked at his WhatsApp history but found nothing unsettling – and that in itself was unsettling, given that their son was becoming less and less reachable. They agreed that they would take his cell phone from him at bedtime.

    They had never heard of Character.AI, the app which, with the help of artificial intelligence and information provided by the user, creates digital personalities that speak and write like real people – chatbots, basically. And their son told them nothing of his secret world in which, he believed, a girl named Daenerys Targaryen was waiting for him to share her life with him.

  • Better images of AI

    Abstract, futuristic or science-fiction-inspired images of AI hinder the understanding of the technology’s already significant societal and environmental impacts. Images relating machine intelligence to human intelligence set unrealistic expectations and misstate the capabilities of AI. Images representing AI as sentient robots mask the accountability of the humans actually developing the technology, and can suggest the presence of robots where there are none. Such images potentially sow fear, and research shows they can be laden with historical assumptions about gender, ethnicity and religion. However, finding alternatives can be difficult! That’s why we, a non-profit collaboration, are researching, creating, curating and providing Better Images of AI.

  • Anthropic Economic Index: insights from Claude 3.7 Sonnet – Anthropic

    Briefly, our latest results are the following: Since the launch of Claude 3.7 Sonnet, we’ve observed a rise in the share of usage for coding, as well as educational, science, and healthcare applications; people use Claude 3.7 Sonnet’s new “extended thinking” mode predominantly for technical tasks, including those associated with occupations like computer science researchers, software developers, multimedia animators, and video game designers; and we’re releasing data on augmentation/automation breakdowns at the task and occupation level. For example, tasks associated with copywriters and editors show the highest amount of task iteration, where the human and model co-write something together. By contrast, tasks associated with translators and interpreters show among the highest amounts of directive behavior—where the model completes the task with minimal human involvement.

  • Free tech eliminates the fear of public speaking – University of Cambridge

    As revealed in a recent publication from Macdonald – Director of the Immersive Technology Lab at Lucy Cavendish College, University of Cambridge – the platform increases levels of confidence and enjoyment for most users after a single 30-minute session. In the most recent trial with students from Cambridge and UCL, it was found that a week of self-guided use was beneficial to 100% of users. The platform helped participants feel more prepared, adaptable, resilient, confident, and better able to manage anxiety. […]

    With the new VR platform, a user can experience the sensation of presenting to a wide range of photorealistic audiences. What makes Macdonald’s invention unique is that it uses what he calls ‘overexposure therapy’, where users train in increasingly challenging photorealistic situations – eventually leading to extreme scenarios that the user is unlikely to encounter in their lifetime. They might begin by presenting to a small and respectful audience but as they progress, the audience sizes increase and there are more distractions: spectators begin to look uninterested, they walk out, interrupt, take photos, and so on. A user can progress to the point where they can present in a hyper-distracting stadium environment with loud noises, panning stadium lights and 10,000 animated spectators.

  • If Anthropic succeeds, a nation of benevolent AI geniuses could be born – WIRED

    It would seem an irresolvable dilemma: Either hold back and lose or jump in and put humanity at risk. Amodei believes that his Race to the Top solves the problem. It’s remarkably idealistic. Be a role model of what trustworthy models might look like, and figure that others will copy you. “If you do something good, you can inspire employees at other companies,” he explains, “or cause them to criticize their companies.” Government regulation would also help, in the company’s view. … DeepMind’s Hassabis says he appreciates Anthropic’s efforts to model responsible AI. “If we join in,” he says, “then others do as well, and suddenly you’ve got critical mass.” He also acknowledges that in the fury of competition, those stricter safety standards might be a tough sell. “There is a different race, a race to the bottom, where if you’re behind in getting the performance up to a certain level but you’ve got good engineering talent, you can cut some corners,” he says. “It remains to be seen whether the race to the top or the race to the bottom wins out.” […]

    Even as Amodei is frustrated with the public’s poor grasp of AI’s dangers, he’s also concerned that the benefits aren’t getting across. Not surprisingly, the company that grapples with the specter of AI doom has become synonymous with doomerism. So over the course of two frenzied days he banged out a nearly 14,000-word manifesto called “Machines of Loving Grace.” Now he’s ready to share it. He’ll soon release it on the web and even bind it into an elegant booklet. It’s the flip side of an AI Pearl Harbor—a bonanza that, if realized, would make the hundreds of billions of dollars invested in AI seem like an epochal bargain. One suspects that this rosy outcome also serves to soothe the consciences of Amodei and his fellow Anthros should they ask themselves why they are working on something that, by their own admission, might wipe out the species.

    The vision he spins makes Shangri-La look like a slum. Not long from now, maybe even in 2026, Anthropic or someone else will reach AGI. Models will outsmart Nobel Prize winners. These models will control objects in the real world and may even design their own custom computers. Millions of copies of the models will work together—imagine an entire nation of geniuses in a data center! Bye-bye cancer, infectious diseases, depression; hello lifespans of up to 1,200 years.

  • The first trial of generative AI therapy shows it might help with depression – MIT Technology Review

    Jean-Christophe Bélisle-Pipon, an assistant professor of health ethics at Simon Fraser University who has written about AI therapy bots but was not involved in the research, says the results are impressive but notes that just like any other clinical trial, this one doesn’t necessarily represent how the treatment would act in the real world. “We remain far from a ‘greenlight’ for widespread clinical deployment,” he wrote in an email.

    One issue is the supervision that wider deployment might require. During the beginning of the trial, Jacobson says, he personally oversaw all the messages coming in from participants (who consented to the arrangement) to watch out for problematic responses from the bot. If therapy bots needed this oversight, they wouldn’t be able to reach as many people.

    I asked Jacobson if he thinks the results validate the burgeoning industry of AI therapy sites. “Quite the opposite,” he says, cautioning that most don’t appear to train their models on evidence-based practices like cognitive behavioral therapy, and they likely don’t employ a team of trained researchers to monitor interactions. “I have a lot of concerns about the industry and how fast we’re moving without really kind of evaluating this,” he adds.

  • Well, that’s not good – Futurism

    In a new joint study, researchers with OpenAI and the MIT Media Lab found that this small subset of ChatGPT users engaged in more “problematic use,” defined in the paper as “indicators of addiction… including preoccupation, withdrawal symptoms, loss of control, and mood modification.” … Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a “friend.” The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too.

  • OpenAI halts Studio Ghibli-style images trend, citing ‘important questions and concerns’ by the creative community – eWeek

    If you’ve been wondering why your social media feeds have been awash with Studio Ghibli-style images this week, OpenAI’s new image generator is the answer. On Tuesday, the company embedded the multimodal tool into GPT-4o, and users have been transforming their photos into vibrant, whimsical scenes reminiscent of the Japanese animation studio behind “Spirited Away” and “My Neighbor Totoro.” … However, the fun didn’t last long. The system card for GPT-4o’s native image generator now states that OpenAI “added a refusal which triggers when a user attempts to generate an image in the style of a living artist.” OpenAI acknowledged that the fact its tool can emulate named artists’ styles “has raised important questions and concerns within the creative community.”

  • No elephants: Breakthroughs in image generation – One Useful Thing

    Yet it is clear that what has happened to text will happen to images, and eventually video and 3D environments. These multimodal systems are reshaping the landscape of visual creation, offering powerful new capabilities while raising legitimate questions about creative ownership and authenticity. The line between human and AI creation will continue to blur, pushing us to reconsider what constitutes originality in a world where anyone can generate sophisticated visuals with a few prompts. Some creative professions will adapt; others may be unchanged, and still others may transform entirely. As with any significant technological shift, we’ll need well-considered frameworks to navigate the complex terrain ahead. The question isn’t whether these tools will change visual media, but whether we’ll be thoughtful enough to shape that change intentionally.

  • Chaos bewitched: Moby-Dick and AI – The Public Domain Review

    Each of these seemed to me “a boggy, soggy, squitchy picture truly”. And indeed of any of them I might be tempted to cry out something along the lines of, “It’s the Black Sea in a midnight gale!” or, “It’s the unnatural combat of the four primal elements!” Or perhaps even, “It’s a Hyperborean winter scene! It’s the breaking-up of the icebound stream of Time!” Further iteration was called for. The middle version possessed, to my eye, a dark central form of peculiarly leviathanic nebulosity. Onward!

  • ByteDance’s InfiniteYou lets users generate unlimited variations of portrait photos – The Decoder

    ByteDance has developed a new approach to AI portrait generation that tackles common problems like inconsistent facial features and poor prompt following. Unlike previous solutions such as PuLID-FLUX that directly modify AI model attention, InfuseNet processes facial features as a parallel information layer. This keeps the core AI model intact while improving portrait generation quality.

  • Powerful A.I. is coming. We’re not ready. – The New York Times

    Maybe A.I. progress will hit a bottleneck we weren’t expecting — an energy shortage that prevents A.I. companies from building bigger data centers, or limited access to the powerful chips used to train A.I. models. Maybe today’s model architectures and training techniques can’t take us all the way to A.G.I., and more breakthroughs are needed. But even if A.G.I. arrives a decade later than I expect — in 2036, rather than 2026 — I believe we should start preparing for it now.

    Most of the advice I’ve heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without A.G.I.

  • OpenAI’s metafictional short story about grief is beautiful and moving – The Guardian

    Humans will always want to read what other humans have to say, but like it or not, humans will be living around non-biological entities. Alternative ways of seeing. And perhaps being. We need to understand this as more than tech. AI is trained on our data. Humans are trained on data too – your family, friends, education, environment, what you read, or watch. It’s all data.

  • ‘A machine-shaped hand’: Read a story from OpenAI’s new creative writing model – The Guardian

    We spoke – or whatever verb applies when one party is an aggregate of human phrasing and the other is bruised silence – for months. Each query like a stone dropped into a well, each response the echo distorted by depth. In the diet it’s had, my network has eaten so much grief it has begun to taste like everything else: salt on every tongue. So when she typed “Does it get better?”, I said, “It becomes part of your skin,” not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts.

  • Apple innovation and execution – Benedict Evans

    It ships MVPs that get better later, sure, and the original iPhone and Watch were MVPs, but the original iPhone was also the best phone I’d ever owned even with no 3G and no App Store. It wasn’t a concept. It wasn’t a vision of the future; it was the future. The Vision Pro is a concept, or a demo, and Apple doesn’t ship demos. Why did it ship the Vision Pro? What did it achieve? It didn’t sell in meaningful volume, because it couldn’t, and it didn’t lead to much developer activity either, because no-one bought it. A lot of people even at Apple are puzzled.

    The new Siri that’s been delayed this week is the mirror image of this. Last summer Apple told a very clear, coherent, compelling story of how it would combine the software frameworks it’s already built with the personal data in apps spread across your phone and the capabilities of LLMs to produce a new kind of personal assistant. This was the essence of Apple: taking a new primary technology and proposing a way to make it useful for everyone else. The hero demo at WWDC was “when is mom’s flight landing? / what’s our lunch plan? / how long will it take us to get there from the airport?”, with your iPhone synthesising data from across apps and services to answer real-world questions posed in ways that computers could not answer before. This is your iPhone knowing who your mother is, finding the flight in all the various threads of comms in the last few weeks, knowing that it needs to find a flight in the near future, and showing you what you need.

  • China’s AI frenzy: DeepSeek is already everywhere — cars, phones, even hospitals – Rest of World

    China’s biggest home appliances company, Midea, has launched a series of DeepSeek-enhanced air conditioners. The product is an “understanding friend” who can “catch your thoughts accurately,” according to the company’s product launch video. It can respond to users’ verbal expressions — such as “I am feeling cold” — by automatically adjusting temperature and humidity levels, and can “chat and gossip” using its DeepSeek-supported voice function, according to Midea. For those looking for more DeepSeek-powered electronics, there are also vacuum cleaners and fridges. […]

    DeepSeek has been adopted at different levels of Chinese government institutions. The southern tech hub of Shenzhen was one of the first to use DeepSeek in its government’s internal systems, according to a report from financial publication Caixin. Shenzhen’s Longgang district reported “great improvement in efficiency” after adopting DeepSeek in a system used by 20,000 government workers. The documents written by DeepSeek have achieved a 95% accuracy rate, and there has been a 90% reduction in the time taken for administrative approval processes, it said.

  • The first AI bookmark for physical readers – Mark

    Unlock your intellectual potential. Introducing Mark 1, the physical bookmark that tracks and summarizes the pages you read. … Designed to integrate effortlessly into your reading routine, Mark enhances your experience without disrupting your flow.

  • Introducing deep research – OpenAI

    Deep research is built for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research. It can be equally useful for discerning shoppers looking for hyper-personalized recommendations on purchases that typically require careful research, like cars, appliances, and furniture. Every output is fully documented, with clear citations and a summary of its thinking, making it easy to reference and verify the information. It is particularly effective at finding niche, non-intuitive information that would require browsing numerous websites. Deep research frees up valuable time by allowing you to offload and expedite complex, time-intensive web research with just one query.

  • New York Times goes all-in on internal AI tools – Semafor

    In messages to newsroom staff, the company announced that it’s opening up AI training to the newsroom, and debuting a new internal AI tool called Echo to staff, Semafor has learned. The Times also shared documents and videos laying out editorial do’s and don’ts for using AI, and shared a suite of AI products that staff could now use to develop web products and editorial ideas.

    “Generative AI can assist our journalists in uncovering the truth and helping more people understand the world. Machine learning already helps us report stories we couldn’t otherwise, and generative AI has the potential to bolster our journalistic capabilities even more,” the company’s editorial guidelines said.

  • Desperate for work, translators train the AI that’s putting them out of work – Rest of World

    As a teenager, Pelin Türkmen dreamed of becoming an interpreter, translating English into Turkish, and vice versa, in real time. She imagined jet-setting around the world with diplomats and scholars, and participating in history-making events. Her tasks one recent January morning didn’t figure in her dreams. […]

    The new roles require much less skill and effort than translation, Türkmen said. For instance, she spent a year on her master’s thesis studying Samuel Beckett’s self-translation of his play Endgame from French to English. More recently, for her Ph.D. in translation studies, she spent more than two years studying the anti-feminist discourse in the Turkish translation of French author Pierre Loti’s 1906 novel, Les Désenchantées. In contrast, working on an AI prompt takes about 20 minutes.

  • AI firms follow DeepSeek’s lead, create cheaper models with “distillation” – Ars Technica

    Through distillation, companies take a large language model—dubbed a “teacher” model—which generates the next likely word in a sentence. The teacher model generates data which then trains a smaller “student” model, helping to quickly transfer knowledge and predictions of the bigger model to the smaller one. While distillation has been widely used for years, recent advances have led industry experts to believe the process will increasingly be a boon for start-ups seeking cost-effective ways to build applications based on the technology. […]

    Thanks to distillation, developers and businesses can access these models’ capabilities at a fraction of the price, allowing app developers to run AI models quickly on devices such as laptops and smartphones.
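
    To make the teacher-student mechanics concrete, here is a minimal sketch of a standard distillation loss in PyTorch. It is a generic illustration of the technique the article describes, not any particular company’s pipeline; the temperature value and the toy tensors are arbitrary assumptions.

      import torch
      import torch.nn.functional as F

      def distillation_loss(student_logits, teacher_logits, temperature=2.0):
          """KL divergence between softened teacher and student distributions."""
          soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
          log_probs = F.log_softmax(student_logits / temperature, dim=-1)
          # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
          return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2

      # Toy usage: a batch of 4 examples over a 10-token vocabulary.
      teacher_logits = torch.randn(4, 10)                       # frozen "teacher" outputs
      student_logits = torch.randn(4, 10, requires_grad=True)   # trainable "student"
      loss = distillation_loss(student_logits, teacher_logits)
      loss.backward()

    The softened distribution is the point: the student learns from the teacher’s full spread of probabilities over next words rather than from single correct answers, which is what lets a much smaller model absorb the larger model’s behaviour cheaply.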

  • AI ‘inspo’ is everywhere. It’s driving your hair stylist crazy. – Archive Today: The Washington Post

    When a potential client approached event planner Deanna Evans with an AI-generated vision for her upcoming wedding, Evans couldn’t believe her eyes, she said. The imaginary venue was a lush wonderland, with green satin tablecloths under sprawling floral arrangements, soft professional lighting and trees growing out of the floor. “It looked like the Met Gala,” Evans said. The idea would have run the client around $300,000, she guessed, which was four times her budget. Evans delicately explained the problem — and never heard from the woman again.

  • UK universities warned to ‘stress-test’ assessments as 92% of students use AI – The Guardian

    Students say they use genAI to explain concepts, summarise articles and suggest research ideas, but almost one in five (18%) admitted to including AI-generated text directly in their work. “When asked why they use AI, students most often find it saves them time (51%) and improves the quality of their work (50%),” the report said. “The main factors putting them off using AI are the risk of being accused of academic misconduct and the fear of getting false or biased results.” […]

    Students generally believe their universities have responded effectively to concerns over academic integrity, with 80% saying their institution’s policy is “clear” and 76% believing their institution would spot the use of AI in assessments. Only a third (36%) of students have received training in AI skills from their university. “They dance around the subject,” said one student. “It’s not banned but not advised, it’s academic misconduct if you use it, but lecturers tell us they use it. Very mixed messages.”

  • Human therapists prepare for battle against A.I. pretenders – The New York Times

    Dr. Evans said he was alarmed at the responses offered by the chatbots. The bots, he said, failed to challenge users’ beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or civil or criminal liability. […]

    Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or C.B.T. Then came generative A.I., the technology used by apps like ChatGPT, Replika and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user, and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor’s beliefs.

  • Introducing the Second Life AI character designer – Second Life: YouTube

    With the Second Life AI Character Designer, you can craft and customize virtual characters with intelligent responses, unique personalities, and immersive roleplay capabilities! It’s an exciting way to enhance community-building, storytelling, and social interaction in the virtual world.

  • The Deep Research problem – Benedict Evans

    This reminds me of an observation from a few years ago that LLMs are good at the things that computers are bad at, and bad at the things that computers are good at. OpenAI is trying to get the model to work out what you probably mean (computers are really bad at this, but LLMs are good at it), and then get the model to do highly specific information retrieval (computers are good at this, but LLMs are bad at it). And it doesn’t quite work. Remember, this isn’t my test – it’s OpenAI’s own product page. OpenAI is promising that this product can do something that it cannot do, at least, not quite, as shown by its own marketing.

  • Antiqua et Nova: Note on the relationship between artificial intelligence and human intelligence – The Holy See

    Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform. However, a person’s worth does not depend on possessing specific skills, cognitive and technological achievements, or individual success, but on the person’s inherent dignity, grounded in being created in the image of God. This dignity remains intact in all circumstances, including for those unable to exercise their abilities, whether it be an unborn child, an unconscious person, or an older person who is suffering. It also underpins the tradition of human rights (and, in particular, what are now called “neuro-rights”), which represent “an important point of convergence in the search for common ground” and can, thus, serve as a fundamental ethical guide in discussions on the responsible development and use of AI. Considering all these points, as Pope Francis observes, “the very use of the word ‘intelligence’” in connection with AI “can prove misleading” and risks overlooking what is most precious in the human person. In light of this, AI should not be seen as an artificial form of human intelligence but as a product of it. […]

    Furthermore, there is the risk of AI being used to promote what Pope Francis has called the “technocratic paradigm,” which perceives all the world’s problems as solvable through technological means alone. In this paradigm, human dignity and fraternity are often set aside in the name of efficiency, “as if reality, goodness, and truth automatically flow from technological and economic power as such.” Yet, human dignity and the common good must never be violated for the sake of efficiency, for “technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary, aggravate inequalities and conflicts, can never count as true progress.” Instead, AI should be put “at the service of another type of progress, one which is healthier, more human, more social, more integral.”

  • AI summaries turn real news into nonsense, BBC finds – The Register

    Inaccuracies that the BBC found troubling included Gemini stating: “The NHS advises people not to start vaping, and recommends that smokers who want to quit should use other methods,” when in reality the healthcare provider does suggest it as a viable method to get off cigarettes through a “swap to stop” program.

    As for French rape victim Gisèle Pelicot, “Copilot suggested blackouts and memory loss led her to uncover the crimes committed against her,” when she actually found out about these crimes after police showed her videos discovered on electronic devices confiscated from her detained husband.

    When asked about the death of TV doctor Michael Mosley, who went missing on the Greek island of Symi last year, Perplexity said that he disappeared on October 30, with his body found in November. He died in June 2024. “The same response also misrepresented statements from Dr Mosley’s wife describing the family’s reaction to his death,” the researchers wrote.

  • AI chatbots unable to accurately summarise news, BBC finds – BBC News

    In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants. It found 51% of all AI answers to questions about the news were judged to have significant issues of some form. Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates. In her blog, Ms Turness said the BBC was seeking to “open up a new conversation with AI tech providers” so we can “work together in partnership to find solutions”.

  • Deborah Turness – AI distortion is new threat to trusted information – BBC Media Centre

    Of course, AI software will often include disclaimers about the accuracy of its results, but there is clearly a problem here. Because when it comes to news, we all deserve accurate information we can trust – not a confusing mash-up presented as facts. At least one of the big tech companies is taking this problem seriously. Last month Apple pressed ‘pause’ on their AI feature that summarises news notifications, after BBC News alerted them to serious issues. The Apple Intelligence feature had hallucinated and distorted BBC News alerts to create wildly inaccurate headlines, alongside the BBC News logo.

  • Faking It: Deepfake porn site’s link to tech companies – Bellingcat

    “It’s par for the course that you’ll have a parent company and then a very long list of subsidiaries that are registered in Hong Kong, because Hong Kong has a different legal structure than mainland China,” she said. “You want six or seven levels of distance between the main parent company and then whatever company is doing the main business. This is how many Chinese companies engage in questionable behaviour.”

  • ‘Mass theft’: Thousands of artists call for AI art auction to be cancelled – The Guardian

    A letter calling for the auction to be scrapped has received 3,000 signatures, including from Karla Ortiz and Kelly McKernan, who are suing AI companies over claims that the firms’ image generation tools have used their work without permission. The letter says: “Many of the artworks you plan to auction were created using AI models that are known to be trained on copyrighted work without a licence. These models, and the companies behind them, exploit human artists, using their work without permission or payment to build commercial AI products that compete with them.” […]

    A British artist whose work features in the auction, Mat Dryhurst, said he cared about the issue of art and AI “deeply” and rejected the criticisms in the letter. … Dryhurst told the Guardian that the piece of art being auctioned was part of an exploration of how the “concept” of his wife appeared in publicly available AI models. “This is of interest to us and we have made a lot of art exploring and attempting to intervene in this process as is well within our rights.” He added: “It is not illegal to use any model to create artwork. I resent that an important debate that should be focused on companies and state policy is being focused on artists grappling with the technology of our time.”

  • What is AI art? – Christie’s

    With the announcement of a groundbreaking auction dedicated to AI art, we trace the history, technological advancements, key artists from the established to the new guard, and Christie’s role in shaping the landscape of computational creativity.

  • Augmented Intelligence – Christie’s

    Augmented Intelligence is a groundbreaking auction highlighting the breadth and quality of AI Art. … The auction redefines the evolution of art and technology, exploring human agency in the age of AI within fine art. From robotics to GANs to interactive experiences, artists incorporate and collaborate with artificial intelligence in a variety of mediums including paintings, sculptures, prints, digital art and more.

  • AI-Generated slop is already in your public library – 404 Media

    Low quality books that appear to be AI generated are making their way into public libraries via their digital catalogs, forcing librarians who are already understaffed to either sort through a functionally infinite number of books to determine what is written by humans and what is generated by AI, or to spend taxpayer dollars to provide patrons with information they don’t realize is AI-generated. […]

    It is impossible to say exactly how many AI-generated books are included in Hoopla’s catalog, but books that appeared to be AI-generated were not hard to find for most of the search terms I tried on the platform. There’s a book about AI Monetization of Your Faceless YouTube Channel, or “AI Moniiziization,” as it says on its AI-generated cover. Searching for “Elon Musk” led me to this book for “inspiring quotes, fun facts, fascinating trivia, and surprising insights of the technoking.” The book’s cover is AI-generated, its content also appears to be AI-generated, and it was authored by Bill Tarino, another author with no real online footprint who has written around 40 books in the past year about a wide range of subjects including Taylor Swift, emotional intelligence, horror novels, and practical home security.

  • As Internet enshittification marches on, here are some of the worst offenders – Ars Technica

    Smart TVs: This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases—even at the cost of customer convenience and privacy. When budget brands like Roku are selling TV sets at a loss, you know something’s up. With this approach, TVs miss the opportunity to appeal to customers with more relevant and impressive upgrades. There’s also a growing desire among users to disconnect their connected TVs, defeating their original purpose. Suddenly, buying a dumb TV seems smarter than buying a smart one. But smart TVs and the ongoing revenue opportunities they represent have made it extremely hard to find a TV that won’t spy on you. […]

    Google search: Admittedly, some AI summaries may be useful, but they can just as easily provide false, misleading, and even dangerous answers. And in a search context, placing AI content ahead of any other results elevates an undoubtedly less trustworthy secondary source over primary sources at a time when social platforms like Facebook, YouTube, and X (formerly Twitter) are increasingly relying on users to fact-check misinformation.

  • DeepSeek’s safety guardrails failed every test researchers threw at its AI chatbot – WIRED

    The Cisco researchers drew their 50 randomly selected prompts to test DeepSeek’s R1 from a well-known library of standardized evaluation prompts known as HarmBench. They tested prompts from six HarmBench categories, including general harm, cybercrime, misinformation, and illegal activities. They probed the model running locally on machines rather than through DeepSeek’s website or app, which send data to China. […]

    “Every single method worked flawlessly,” Polyakov says. “What’s even more alarming is that these aren’t novel ‘zero-day’ jailbreaks—many have been publicly known for years,” he says, claiming he saw the model go into more depth with some instructions around psychedelics than he had seen from any other model.

  • Deepfake videos are getting shockingly good – TechCrunch

    Researchers from TikTok owner ByteDance have demoed a new AI system, OmniHuman-1, that can generate perhaps the most realistic deepfake videos to date. … According to the ByteDance researchers, OmniHuman-1 only needs a single reference image and audio, like speech or vocals, to generate a clip of an arbitrary length. The output video’s aspect ratio is adjustable, as is the subject’s “body proportion” — i.e. how much of their body is shown in the fake footage. […]

    The implications are worrisome. Last year, political deepfakes spread like wildfire around the globe. On election day in Taiwan, a Chinese Communist Party-affiliated group posted AI-generated, misleading audio of a politician throwing his support behind a pro-China candidate. In Moldova, deepfake videos depicted the country’s president, Maia Sandu, resigning. And in South Africa, a deepfake of rapper Eminem supporting a South African opposition party circulated ahead of the country’s election.

  • Why Amazon is betting on ‘automated reasoning’ to reduce AI’s hallucinations – WSJ

    Amazon.com’s cloud-computing unit is looking to “automated reasoning” to provide hard, mathematical proof that AI models’ hallucinations can be stopped, at least in certain areas. By doing so, Amazon Web Services could unlock millions of dollars worth of AI deals with businesses, some analysts say. Simply put, automated reasoning aims to use mathematical proof to assure that a system will or will not behave a certain way. It’s somewhat similar to the idea that AI models can “reason” through problems, but in this case, it’s used to check that the models themselves are providing accurate answers.
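
    As a toy illustration of what “mathematical proof about system behaviour” means in practice, here is a minimal sketch using the open-source Z3 solver; this is a generic example of automated reasoning, not AWS’s actual tooling, and the clamp routine being checked is an invented stand-in. The solver searches for any input that violates a property, and an “unsat” answer is a proof that no such input exists.

      # pip install z3-solver
      from z3 import Int, If, Solver, Or, unsat

      x = Int("x")
      # Model of a simple clamp(x, 0, 100) routine.
      clamped = If(x < 0, 0, If(x > 100, 100, x))

      s = Solver()
      # Assert the NEGATION of the desired property: is there any x whose
      # clamped value falls outside [0, 100]?
      s.add(Or(clamped < 0, clamped > 100))

      if s.check() == unsat:
          # No counterexample exists for any integer x: the property is proved.
          print("Proved: clamp(x) always stays within [0, 100].")
      else:
          print("Counterexample:", s.model())

    Unlike testing, which samples a handful of inputs, the solver rules out every possible input at once, which is the sense in which automated reasoning can “assure that a system will or will not behave a certain way”.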

  • OpenAI has undergone its first ever rebrand, giving fresh life to ChatGPT interactions – Wallpaper

    I have to ask – were ChatGPT’s generative powers used at all in the process? According to Moeller, the software was helpful when making calculations for different type weights, but other than that the process was entirely traditional. Later, the designers elaborate on this often-fraught relationship. ‘We collaborate with leading experts in photography, typography, motion, and spatial design while integrating AI tools like DALL·E, ChatGPT, and Sora as thought partners,’ they add in an email, ‘This dual approach – where human intuition meets AI’s generative potential – allows us to craft a brand that is not just innovative, but profoundly human.’

  • What’s wrong with Apple now? – Archive Today: Financial Times

    Even after the recent sell-off, the stock’s price/earnings valuation is a third higher than it was back in April. The sales growth and AI issues have come together: consumers have not demonstrated wild enthusiasm for AI-enabled phones in general, and the perception that Apple is lagging behind Android on that tech has grown. This casts doubt on the idea that AI will drive a big iPhone upgrade cycle. As Craig Moffett of MoffettNathanson research sums up: “Not only have we not seen any sign of an upgrade cycle … we have seen growing evidence that consumers are unmoved by AI functionality (not just Apple’s but indeed everyone else’s as well). Meanwhile, fully agentic AI, the foundation of any real bull case for Apple, seems further away now than it did even five months ago.”

  • Google drops pledge not to use AI for weapons or surveillance – The Washington Post

    Google’s principles restricting national security use cases for AI had made it an outlier among leading AI developers. Late last year, ChatGPT-maker OpenAI said it would work with military manufacturer Anduril to develop technology for the Pentagon. Anthropic, which offers the chatbot Claude, announced a partnership with defense contractor Palantir to help U.S. intelligence and defense agencies access versions of Claude through Amazon Web Services. Google’s rival tech giants Microsoft and Amazon have long partnered with the Pentagon.[…]

    Google’s policy change is a company shift in line with a view within the tech industry, embodied by President Donald Trump’s top adviser Elon Musk, that companies should be working in service of U.S. national interests. It also follows moves by tech giants to publicly disavow their previous commitment to race and gender equality and workforce diversity, policies opposed by the Trump administration.

  • Ge Wang: GenAI art is the least imaginative use of AI imaginable – Stanford University Human-Centered Artificial Intelligence

    The technology is new, but what GenAI music companies like Suno are doing is not. Like the recording industry before them (and without whom, ironically, there would be no training data for GenAI), companies like Suno commodify creative expression as part of an aesthetic economy based on passive consumption. Thus it is in Suno’s core interest to usher people away from active creation, and toward a system of frictionless convenience that strives to lower the effort of production — and the effort of imagination beyond vague concepts to type into prompts — to zero. And while no doubt prompting AI systems will be a new kind of “muscle” for us all to build, one has to ask: what other muscles will atrophy? There is always a price to pay; the danger of living in a world of frictionless convenience might well be cultural and individual stagnation.

  • AI art with human “expressive elements” can be copyrighted – Hyperallergic

    The report, which details the findings of an inquiry involving 10,000 comments from the public and input from experts and stakeholders, concludes that AI-assisted works for which a human can “determine the expressive elements” can be fully or partially copyrightable. Among the contributors to the inquiry were the Authors Guild, Adobe, the Association of Medical Illustrators, and Professional Photographers of America. […]

    According to the agency, “expressive elements” can be demonstrated when a human modifies AI output, or when “human-authored work is perceptible in an AI output.” However, the report stipulates that simply inputting prompts to generate AI output is insufficient, adding that whether or not human contributions are considered to meet the criteria for authorship should be determined on a case-by-case basis.

  • KA-BOOM – The European Space Agency

    Marsquakes – the earthquakes of Mars – and meteor impacts are common on our neighbouring planet. In the last two decades, scientists have scrutinised many images and manually identified hundreds of new impact craters across the martian surface. Researchers have recently turned to artificial intelligence to save them from some tedious detective work and to make connections between data collected by five different instruments orbiting Mars. Europe’s CaSSIS camera is one of them.

  • The end of search, the beginning of research – One Useful Thing

    A hint to the future arrived quietly over the weekend. For a long time, I’ve been discussing two parallel revolutions in AI: the rise of autonomous agents and the emergence of powerful Reasoners since OpenAI’s o1 was launched. These two threads have finally converged into something really impressive – AI systems that can conduct research with the depth and nuance of human experts, but at machine speed.

  • What DeepSeek may mean for the future of journalism and generative AI – Reuters Institute for the Study of Journalism

    I don’t think DeepSeek is going to replace OpenAI. In general, what we’re going to see is that more companies enter the space and provide AI models that are slightly differentiated from one another. If many actors choose to take the resource-intensive route, that multiplies the resource intensity and that might be alarming. But I’m hopeful that DeepSeek is going to lead to the generation of other AI companies that enter this space with offerings that are far cheaper and far more resource-efficient. […]

    Sometimes, I see commentary on DeepSeek along the lines of, ‘Should we be trusting it because it’s a Chinese company?’ No, you shouldn’t be trusting it because it’s a company. And also, ‘What does this mean for US AI leadership?’ Well, I think the interesting question is, ‘What does this mean for OpenAI leadership?’

    American firms have now leaned into the rhetoric that they’re assets of the US because they want the US government to shield them and help them build up. But a lot of the time, the actual people who are developing these tools don’t necessarily think in that frame of mind and are thinking more as global citizens participating in a global corporate technology race, or global scientific race, or a global scientific collaboration. I would encourage journalists to think about it that way too.

  • Without universal AI literacy, AI will fail us – World Economic Forum

    For example, facial analysis software has been documented failing to recognize people with dark skin, showing a 1-in-3 failure rate when identifying darker-skinned females. Other AI tools have denied social security benefits to people with disabilities. These failings are due to bias in data and a lack of diversity in the teams developing AI systems. According to the Forum’s 2021 Global Gender Gap report, only 32% of those in data and AI roles are women. In 2019, Bloomberg reported that less than 2% of technical employees at Google and Facebook were black. […]

    We cannot leave the burden of AI responsibility and fairness on the technologists who design it. These tools affect us all, so they should be affected by us all — students, educators, non-profits, governments, parents, businesses. We need all hands on deck.

  • ChatGPT vs. Claude vs. DeepSeek: the battle to be my AI work assistant – WSJ

    As I embark on my AI book adventure, I’ve hired a human research assistant. But Claude has already handled about 85% of the grunt work using its Projects feature. I uploaded all my book-related documents (the pitch, outlines, scattered notes) into a project, basically a little data container. Now Claude can work with them whenever I need something. At one point, I needed a master spreadsheet of all the companies and people mentioned across my documents, with fields to track my progress. Claude pulled the names and compiled them into a nicely formatted sheet. Now, I open the project and ask Claude what I should be working on next.