These AI Models Can Replicate Themselves and That’s Not Terrifying At All


Lately, mainstream qualms about AI have centered on the amount of energy its data centers require. But an older fear lurks in the background: will AI ever go rogue? New research suggests that some large language models (LLMs) are capable of concerning autonomous action.

New Research Suggests AI Can Replicate Itself

According to research from China’s Fudan University, some popular LLMs are able to self-replicate, that is, produce additional working copies of themselves. In a paper published to arXiv in December 2024, the researchers found that AI models from Meta and Alibaba crossed a “self-replicating red line.” In other words, the models demonstrated a concerningly high success rate at self-replication.

arXiv is a preprint database, meaning it hosts scientific research that is still in its preliminary form. Findings like this still need to be peer-reviewed, and should be taken with a grain of salt.

For the sake of clarity, here are the two models tested in this research:

  • Meta’s Llama3.1-70B-Instruct
  • Alibaba’s Qwen2.5-72B-Instruct
Image: the Qwen Chat website (credit: Yadullah Abidi / MakeUseOf)

The researchers note that these models have “less parameters and weaker capabilities” compared to OpenAI’s and Google’s flagship models. For what it’s worth, OpenAI and Google have reported low self-replication risk levels for their models, according to the publication.

Why It Matters if AI Can Reproduce

An AI model cloning itself is undoubtedly a scary image, but what does it really mean? The research team behind these recent findings put it this way:

“Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems.”

The term “frontier AI” typically refers to the most advanced AI systems of the moment, such as cutting-edge generative models.

Essentially, if an AI model can figure out how to make a functional copy of itself to avoid being shut down, control passes out of human hands. To mitigate the risk of an “uncontrolled population of AIs,” the researchers suggest building safeguards around these systems as soon as possible.

While this publication certainly amplifies concerns around rogue AI, it doesn’t mean there’s an immediate, confirmed risk for the everyday AI user. What we do know is that Gemini and ChatGPT reportedly carry lower self-replication risk than Meta’s Llama and Alibaba’s Qwen models. As a general rule of thumb, it’s probably best to avoid giving your AI assistant all of your dirty secrets, or full access to the mainframe, until more guardrails are in place.
