Researchers have developed a way to tamper-proof open-source large language models to prevent them from being coaxed into, say, explaining how to make a bomb.