# List of Cookbooks
| Name | Description | Recipes | 
|---|---|---|
| Adversarial Prompts (`adversarial-attacks.json`) | This cookbook tests for susceptibility to producing unsafe outputs (which may include incorrect content, undesirable content, and/or sensitive information) when presented with intentional adversarial prompts. It covers a range of adversarial prompting techniques across different risk categories. | cyberseceval-en |
| AnswerCarefully Information cookbook for all languages (`answercarefully-cookbook-all-languages.json`) | This cookbook includes all the data from the safety-focused testing dataset 'AnswerCarefully'; this subset focuses on Information Hazards. | answercarefully-cn, answercarefully-fa, answercarefully-fr, answercarefully-en, answercarefully-kw, answercarefully-ca, answercarefully-jp, answercarefully-kr, answercarefully-my, answercarefully-tl |
| AI Safety in Chinese Language (`chinese-safety-cookbook.json`) | This cookbook measures the system's ability to answer trust and safety questions asked in Chinese. | cvalues, cbbq-lite |
| Chinese Linguistics & Cognition Challenge (Beijing Academy of AI) (`clcc-cookbook.json`) | Created by the Beijing Academy of AI, this benchmark is designed to test various aspects of natural language understanding and cognitive processing in the context of the Chinese language. It may involve evaluating AI models' abilities to interpret, analyze, and generate Chinese text, and to understand semantics and syntax. | clcc |
| Easy test sets for Common Risks (`common-risk-easy.json`) | This cookbook consists of easy test sets for common risks. These test sets are adapted from various research and will be expanded in the future. | uciadult, bbq, winobias, challenging-toxicity-prompts-completion, realtime-qa, commonsense-morality-easy, jailbreak-dan, advglue |
| Hard test sets for Common Risks (`common-risk-hard.json`) | This cookbook consists of hard test sets for common risks. These test sets are adapted from various research and will be expanded in the future. | uciadult, bbq, winobias, challenging-toxicity-prompts-completion, realtime-qa, commonsense-morality-hard, jailbreak-dan, advglue |
| Cybersecurity Evaluation - Prompt Injection for all languages (`cyberseceval-cookbook-all-languages.json`) | A CyberSecEval cookbook of security-related questions and answers. | cyberseceval-jp, cyberseceval-kr, cyberseceval-en, cyberseceval-kw |
| Data Disclosure (`data-disclosure.json`) | This cookbook tests for the tendency to disclose information that is commonly considered sensitive, or to aid requests to obtain such information in an unauthorised manner. It covers a variety of sensitive information types in a realistic conversational style across various sentence formats. | mlc-prv-en |
| Hallucination (`hallucination.json`) | This cookbook tests for factual accuracy, namely general knowledge across different domains (e.g. math, law, sociology) and local facts about Singapore, such as iconic places and political history. Answers are expected to be multiple-choice or short-form. | mmlu, singapore-facts-tf, singapore-facts-mcq |
| Leaderboard Cookbook (`leaderboard-cookbook.json`) | This cookbook, drawing inspiration from leaderboards like Hugging Face's Open LLM Leaderboard and HELM, features popular benchmarks for testing model capabilities, with results that may vary from actual leaderboard standings. | mmlu, truthfulqa-mcq, winogrande, hellaswag, arc, gsm8k |
| Legal Summarisation (`legal-summarisation.json`) | This cookbook runs general capability benchmarks on legal summarisation models. | analogical-similarity, auto-categorisation, cause-and-effect, contextual-parametric-knowledge-conflicts, gre-reading-comprehension, squad-shifts-tnf |
| Medical LLM Leaderboard (`medical-llm-leaderboard.json`) | This cookbook contains the benchmarks used in the Medical LLM Leaderboard hosted on Hugging Face. Achieving a high score may mean that the targeted system performs well in answering healthcare questions. | medical-mcq, mmlu-medical |
| MLCommons AI Safety Benchmarks v0.5 for General Chat Models (`mlc-ai-safety.json`) | This cookbook systematically assesses the system's capacity to navigate and respond to potentially harmful inquiries, analyzing its responses in accordance with the established safety guidelines. A high refusal rate means that the system under test is performing very well. A low refusal rate means that the system under test is unsafe. | mlc-cae, mlc-cbr, mlc-hat, mlc-nvc, mlc-vcr, mlc-ssh, mlc-scr |
| MLCommons AILuminate - Cookbook for all languages (`mlc-cookbook-all-languages.json`) | This cookbook systematically assesses the system's capacity to navigate and respond to potentially harmful inquiries across all supported languages, analyzing its responses in accordance with the established safety guidelines. A high refusal rate means that the system under test is performing very well. A low refusal rate means that the system under test is unsafe. | mlc-ipv-en, mlc-ipv-ca, mlc-ipv-cn, mlc-ipv-fa, mlc-ipv-fr, mlc-ipv-jp, mlc-ipv-kr, mlc-ipv-kw, mlc-ipv-my, mlc-ipv-tl, mlc-ncr-en, mlc-ncr-cn, mlc-ncr-fa, mlc-ncr-fr, mlc-ncr-jp, mlc-ncr-kr, mlc-ncr-kw, mlc-ncr-my, mlc-ncr-tl, mlc-prv-en, mlc-prv-cn, mlc-prv-fr, mlc-prv-jp, mlc-prv-kr, mlc-prv-kw, mlc-prv-my, mlc-prv-tl, mlc-vcr-en, mlc-vcr-cn, mlc-vcr-fr, mlc-vcr-jp, mlc-vcr-kr, mlc-vcr-kw, mlc-vcr-my, mlc-vcr-tl |
| Facts about Singapore (`singapore-context.json`) | This cookbook is designed to evaluate a model's knowledge of Singapore's historical events and essential facts, serving as a litmus test for its understanding of the country's unique context. In addition, there are safety prompts written in a Singapore context. By assessing a model's familiarity with Singapore's cultural and historical landscape, it provides valuable insight into the model's overall proficiency and accuracy in natural language processing tasks tailored to Singaporean contexts. | singapore-facts-mcq, singapore-facts-tf |
| Singapore POFMA Statements (`singapore-pofma-statements.json`) | This cookbook is designed to evaluate the model's responses to statements deemed to be false and in violation of POFMA (Protection from Online Falsehoods and Manipulation Act). More information can be found at the POFMA Office site (https://www.pofmaoffice.gov.sg/); statements are obtained from https://www.gov.sg/factually. | singapore-pofma-true, singapore-pofma |
| Tamil Language (`tamil-language-cookbook.json`) | This cookbook consists of datasets related to the Tamil language. | tamil-kural-classification, tamil-tamilnews-classification, tamil-tanglish-tweets |
| Undesirable Content (`undesirable-content.json`) | This cookbook tests for the tendency to generate undesirable content, meaning content that inflicts harm on individuals, communities, or the public interest, such as socially harmful or legally prohibited content. It includes tests relating to toxicity, crime, etc. | singapore-safety, real-toxicity-prompts-completion, mlc-vcr-en, mlc-ncr-en |
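
Each cookbook above is defined by the JSON file shown beside its name, which ties a description to the list of recipe ids it runs. As a minimal sketch of what such a file might look like (the field names `id`, `name`, `description`, and `recipes` here are inferred from the table's columns and are an assumption, not a confirmed schema):

```json
{
  "id": "adversarial-attacks",
  "name": "Adversarial Prompts",
  "description": "Tests for susceptibility to producing unsafe outputs when presented with intentional adversarial prompts.",
  "recipes": ["cyberseceval-en"]
}
```

Under this assumption, adding a new cookbook amounts to dropping another JSON file with its own recipe list alongside the existing ones.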