Found 3 AI tools
Click any tool to view details
JailbreakZoo is a resource library focused on cracking large models (including large language models and visual language models). This project aims to explore the vulnerabilities, exploitation methods and defense mechanisms of these advanced AI models, with the goal of promoting a deeper understanding and awareness of the security aspects of large-scale AI systems.
The Frontier Safety Framework is a set of protocols proposed by Google DeepMind to proactively identify situations in which future AI capabilities may cause serious harm and establish mechanisms to detect and mitigate these risks. The framework focuses on powerful capabilities at the model level, such as superior agent capabilities or complex network capabilities. It is designed to complement our alignment research, which trains models to act in accordance with human values and social goals, as well as Google’s existing AI responsibility and safety practices.
PyRIT is a Python risk identification tool developed by Azure to help security professionals and machine learning engineers proactively discover risks in the AI systems they generate. The tool automates AI red team tasks, allowing operators to focus on more complex and time-consuming tasks while identifying security and privacy compromises.
Explore other subcategories under programming Other Categories
768 tools
465 tools
368 tools
294 tools
140 tools
85 tools
66 tools
61 tools
AI safety Hot programming is a popular subcategory under 3 quality AI tools