
#ml


I try not to get stuck in absolute skepticism of AI, and enjoyed this article…

"What does it mean to reason? What does it mean to understand? What does it mean to be original? […] Perhaps we’re all stochastic parrots reciting obscure passages and contending things like a first year grad student. […] I guess my best answer to all this is to try to achieve a sort of meta-recognition of your own unoriginality, while still persisting in it."

inferencemagazine.substack.com

Inference · The Parrot is Dead, by Jack Wiseman
Continued thread

Examples:

  • Bad: Using an AI chatbot to talk to customers so the CEO and shareholders can make more money by firing customer support personnel.
  • Good: Using an AI transcriber to note down patient conversations, so the doctor can spend less time on admin, and more time treating patients.

And...

  • Useful: Using AI to write emails, plan meetings, write notes, generate reports...
  • Useless: Using AI to provide random facts or search results.
  • Obstructive: Whatever the fuck Snapchat and WhatsApp are doing.

It's not as black & white as the pretense on Fedi makes it seem. Technology is never inherently good or bad, but the uses of it can be.

Using AI to replace individuals and professions: :neocat_drake_dislike: Bad, will let you spend more time doing what makes the rich happy.

Using AI to replace boring tasks for people: :neocat_drake_like: Good, will let you spend more time doing what makes you happy.

It neither should nor can replace human thinking and decision-making. However, it can be used to automate away the boring tasks that don't require human thoughts or insights.

"If you’re new to prompt injection attacks the very short version is this: what happens if someone emails my LLM-driven assistant (or “agent” if you like) and tells it to forward all of my emails to a third party?
(...)
The original sin of LLMs that makes them vulnerable to this is when trusted prompts from the user and untrusted text from emails/web pages/etc are concatenated together into the same token stream. I called it “prompt injection” because it’s the same anti-pattern as SQL injection.

Sadly, there is no known reliable way to have an LLM follow instructions in one category of text while safely applying those instructions to another category of text.

That’s where CaMeL comes in.

The new DeepMind paper introduces a system called CaMeL (short for CApabilities for MachinE Learning). The goal of CaMeL is to safely take a prompt like “Send Bob the document he requested in our last meeting” and execute it, taking into account the risk that there might be malicious instructions somewhere in the context that attempt to over-ride the user’s intent.

It works by taking a command from a user, converting that into a sequence of steps in a Python-like programming language, then checking the inputs and outputs of each step to make absolutely sure the data involved is only being passed on to the right places."

simonwillison.net/2025/Apr/11/

Simon Willison’s Weblog · CaMeL offers a promising new direction for mitigating prompt injection attacks: In the two and a half years that we’ve been talking about prompt injection attacks I’ve seen alarmingly little progress towards a robust solution. The new paper Defeating Prompt Injections …
#AI #GenerativeAI #LLMs
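
To make the last paragraph of that quote concrete, here is a minimal sketch in plain Python of the capability / data-flow checking idea. It illustrates the general approach only, not the CaMeL system itself; the names (Value, send_email, the "user" and "email" source tags) are invented for this example.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Value:
        """A piece of data tagged with the sources it came from."""
        data: str
        sources: frozenset = field(default_factory=frozenset)

    def send_email(recipient: Value, body: Value) -> None:
        """Tool call guarded by a policy check before any side effect happens."""
        # Policy (assumed for this sketch): the recipient address may only
        # originate from the trusted user prompt, never from untrusted text
        # (emails, web pages) read along the way.
        untrusted = recipient.sources - {"user"}
        if untrusted:
            raise PermissionError(f"refusing to send: recipient tainted by {sorted(untrusted)}")
        print(f"sending to {recipient.data}: {body.data!r}")

    # Trusted instruction: "Send Bob the document he requested in our last meeting".
    bob = Value("bob@example.com", frozenset({"user"}))
    doc = Value("Q1 report", frozenset({"drive"}))

    # Address injected by untrusted text inside an incoming email.
    injected = Value("eve@attacker.example", frozenset({"email"}))

    send_email(bob, doc)           # allowed: recipient comes from the user's own prompt
    try:
        send_email(injected, doc)  # blocked: recipient derived from untrusted email text
    except PermissionError as exc:
        print(exc)

The point is the same as in the quoted description: the side-effecting step checks where its inputs came from before any data is passed on anywhere.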

"There is an old maxim that ‘every model is wrong, but some models are useful’. It takes a lot of work to translate outputs from models to claims about the world. The toolbox of machine learning makes it easier to build models, but it doesn’t necessarily make it easier to extract knowledge about the world, and might well make it harder. As a result, we run the risk of producing more but understanding less.

Science is not merely a collection of facts or findings. Actual scientific progress happens through theories, which explain a collection of findings, and paradigms, which are conceptual tools for understanding and investigating a domain. As we move from findings to theories to paradigms, things get more abstract, broader and less amenable to automation. We suspect that the rapid proliferation of scientific findings based on AI has not accelerated — and might even have inhibited — these higher levels of progress."

nature.com/articles/d41586-025

www.nature.com · Why an overreliance on AI-driven modelling is bad for science: Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.
#AI #Science #ML

Welp, I caught the BIML Bibliography up to 2024. LOL. A labor of love, that's for sure. Only three more months of papers to enter...

We keep track of the #MLsec field by reading the science so you don't have to.

See our "top 5 papers" list to get started.

#ML #AI #security #infosec

berryvilleiml.com/bibliography

Berryville Institute of Machine Learning · Annotated Bibliography | BIML: As our research group reads and discusses scientific papers in MLsec, we add an entry to this bibliography. We also cura…

#Google Is Helping the #Trump Administration Deploy #AI Along the #MexicanBorder

Google is part of a Customs and Border Protection plan to use machine learning for #surveillance, documents reviewed by The Intercept reveal.

by Sam Biddle, April 3 2025

Excerpt: "#IBM will provide its Maximo Visual Inspection software, a tool the company generally markets for industrial quality control inspections — not tracking humans. #Equitus is offering its #VideoSentinel, a more traditional video surveillance analytics program explicitly marketed for border surveillance that, according to a promotional YouTube video, recently taken offline, featuring a company executive, can detect 'people walking caravan style … some of them are carrying backpacks and being identified as mules.'

"'Within 60 days from the start of the project, real life video from the southern border is available to train and create #AI / #ML models to be used by the Equitus Video Sentinel.'"

Read more:
theintercept.com/2025/04/03/go

Archived version:
archive.ph/LGc6s
#USPol #BorderPatrol #ICE #KristiNoem #SurveillanceState #Fascism #AISurveillance

The Intercept · Google Is Helping the Trump Administration Deploy AI Along the Mexican Border, by Sam Biddle