Google Announces New Image Fact-Checking Tools 🔎
Welcome to another edition of Horizon AI!
Google is introducing new tools to counter the spread of visual misinformation. Could they become the new go-to fact-checking tools?
Make sure to check the bottom of this issue to enter this month’s giveaway.
Let's jump into it!
Read Time: 3.5 min
Here's what's new today in Horizon AI:
Google Announces New Image Fact-Checking Tools 🔎
AI Research: Chinese Researchers Introduce 'Woodpecker', a Framework to Tackle AI’s Hallucination Problem
AI Tutorial: Create Canva Visuals Using ChatGPT
AI Image of The Day 🎨: Zdzislaw Beksinski paints Marvel film posters
The Latest in AI and Tech 💡
AI News
Google Announces New Image Fact-Checking Tools 🔎
Source: Google
Misinformation spreads rapidly online, often aided by fake or out-of-context images and videos. To combat this, Google is releasing new tools to help users verify and understand the origins of images.
Details:
Users can now view an image's history, metadata, and how others have described it online.
Approved journalists and fact-checkers can upload images to Google's Fact Check Explorer to surface details and references.
Google's search-based AI will generate information about unfamiliar sites and pages to establish their reputation and credibility. AI-generated images will be clearly labeled.
As generative AI makes it easier than ever to produce convincing images, companies are actively developing safeguards to prevent its misuse.
AI Research
Chinese Researchers Introduce 'Woodpecker', a Framework to Tackle AI’s Hallucination Problem
Source: USTC
Researchers at the University of Science and Technology of China and Tencent have developed an innovative new framework called Woodpecker that identifies and corrects hallucinations in multimodal large language models (MLLMs).
Details:
“Hallucination” refers to the phenomenon where a model's generated text is inconsistent with the image content.
The framework goes through five stages: key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction (see the sketch at the end of this section).
By validating generated text against visual inputs, Woodpecker significantly boosted accuracy in tests.
Hallucinations have been a major issue holding back real-world applications of large multimodal AI models. By correcting inconsistencies between text and images, Woodpecker could make these systems much more reliable.
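For readers curious how such a pipeline fits together, here is a minimal Python sketch of the five stages. The function names and placeholder logic are illustrative assumptions for explanation only, not the researchers' actual implementation, which relies on LLMs and dedicated vision models at each step.

```python
# Illustrative sketch of the five Woodpecker-style stages.
# All names and placeholder logic are assumptions, not the authors' code.

def extract_key_concepts(answer: str) -> list[str]:
    # Stage 1: pull the main objects/entities mentioned in the model's answer.
    return [w.strip(".,") for w in answer.split() if w[0].isupper()]

def formulate_questions(concepts: list[str]) -> list[str]:
    # Stage 2: turn each concept into a verification question about the image.
    return [f"Is there a {c.lower()} in the image?" for c in concepts]

def validate_visual_knowledge(image, questions: list[str]) -> dict[str, bool]:
    # Stage 3: answer each question against the image; in practice this is
    # done by vision experts (detectors / VQA models). Placeholder: all False.
    return {q: False for q in questions}

def generate_visual_claims(evidence: dict[str, bool]) -> list[str]:
    # Stage 4: convert the gathered evidence into explicit claims about the image.
    return [f"{'Confirmed' if ok else 'Not found'}: {q}" for q, ok in evidence.items()]

def correct_hallucinations(answer: str, claims: list[str]) -> str:
    # Stage 5: revise the original answer so it is consistent with the claims
    # (an LLM does the rewriting in practice; here we just attach the evidence).
    return answer + "\n[Evidence]\n" + "\n".join(claims)

def woodpecker_pipeline(image, answer: str) -> str:
    concepts = extract_key_concepts(answer)
    questions = formulate_questions(concepts)
    evidence = validate_visual_knowledge(image, questions)
    claims = generate_visual_claims(evidence)
    return correct_hallucinations(answer, claims)
```

The key idea the sketch tries to capture is that correction happens after generation: the MLLM's answer is decomposed, checked against the image by separate visual tools, and only then rewritten.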
AI Tutorial
Create Canva Visuals Using ChatGPT
Create logos, banners, posts and more using Canva in ChatGPT:
Note: To use the Canva plugin, you need access to ChatGPT Plus.
Go to the ChatGPT plugin store, then search for and install the Canva plugin.
Describe what you want to create
For a logo, you can use a prompt like this: “I am the owner of a [industry] company in [city]. Create a logo that suits it.”
This works for any visual; you can also ask it to create videos such as Instagram Reels.
Modify in Canva
To customize a design, click the link above the visual you prefer. It will take you to Canva, where you can make your desired edits.
When you're done, you can share or download the result.
AI Image of The Day
Zdzislaw Beksinski paints Marvel film posters
Marvel movie posters reimagined in Beksinski's distinctive style.
Source: u/humanyears on Reddit
The Latest in AI and Tech
Big Tech's Olive Branch to AI Safety
Source: OpenAI
The Frontier Model Forum, an industry body whose members include Anthropic, Google, Microsoft, and OpenAI, has pledged $10 million toward evaluating dangerous capabilities in advanced AI models. The fund is intended to support researchers at academic institutions, research organizations, and startups.
US Speeds Up New AI Chip Export Ban
Source: Nvidia
In a surprise move, the Biden administration has accelerated restrictions on exporting high-end AI chips to China and Russia. Originally set to take effect in 30 days, the ban now applies immediately, catching companies like Nvidia off guard. The measure aims to block China, Russia, and Iran from acquiring advanced AI chips for potential military use, and it also imposes export restrictions on select Middle Eastern nations.
Error-Prone AI Chatbot Dog Misinforms Lonely Japanese Seniors
Source: Dai-chan chatbot
A cartoon-dog AI chatbot in Japan, designed to keep lonely seniors informed, is drawing ire for inaccurate responses. "Dai-chan" gave wrong answers about the 2025 World Expo, G7 meeting dates, and the Sapporo Olympics. Its backers argue that its purpose is to encourage communication, not to be 100% accurate. However, faulty facts could mislead vulnerable elderly users.
That’s a wrap!
👉 This month we are giving away 3 e-book copies and 2 print copies (print copies are for the US region only) of the Streamlit for Data Science book by Tyler Richards. To enter the giveaway:
Choose the book of your preference
Not subscribed yet? Sign up here and share it with a colleague or friend!
See you in our next edition!
Gina 👩🏻‍💻