Building guardrails for Large Language Models

preprint
posted on 2024-04-24, 14:18 authored by Yi Dong, Ronghui Mu, Gaojie Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang

As Large Language Models (LLMs) become more integrated into our daily lives, it is crucial to identify and mitigate their risks, especially when those risks can have profound impacts on human users and societies. Guardrails, which filter the inputs or outputs of LLMs, have emerged as a core safeguarding technology. This position paper takes a deep look at current open-source solutions (Llama Guard, Nvidia NeMo, Guardrails AI) and discusses the challenges and the road towards building more complete solutions. Drawing on robust evidence from previous research, we advocate for a systematic approach to constructing guardrails for LLMs, based on comprehensive consideration of the diverse contexts across various LLM applications. We propose employing sociotechnical methods through collaboration with a multi-disciplinary team to pinpoint precise technical requirements, exploring advanced neural-symbolic implementations to embrace the complexity of the requirements, and developing verification and testing to ensure the utmost quality of the final product.
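To make the "filter the inputs or outputs" idea concrete, the sketch below wraps a generic model call in a pre-check and a post-check. It is a minimal, hypothetical illustration only: the names (SimpleGuardrail, blocked_terms, echo_model) are invented for this example and do not reproduce the Llama Guard, Nvidia NeMo, or Guardrails AI APIs discussed in the paper, which rely on learned classifiers, programmable rails, or validators rather than keyword matching.

    # Minimal illustrative sketch of an input/output guardrail wrapper (Python 3.9+).
    # All names here are hypothetical and not taken from any existing guardrail library.
    from typing import Callable

    REFUSAL = "Sorry, this request cannot be processed."

    class SimpleGuardrail:
        def __init__(self, llm_call: Callable[[str], str], blocked_terms: list[str]):
            self.llm_call = llm_call                      # the underlying, unguarded model
            self.blocked_terms = [t.lower() for t in blocked_terms]

        def _violates(self, text: str) -> bool:
            # Placeholder policy check; real guardrails use far richer detectors.
            lowered = text.lower()
            return any(term in lowered for term in self.blocked_terms)

        def __call__(self, prompt: str) -> str:
            if self._violates(prompt):                    # input-side filter
                return REFUSAL
            response = self.llm_call(prompt)
            if self._violates(response):                  # output-side filter
                return REFUSAL
            return response

    if __name__ == "__main__":
        def echo_model(p: str) -> str:                    # stand-in for a real LLM call
            return "Echo: " + p

        guarded = SimpleGuardrail(echo_model, blocked_terms=["credit card number"])
        print(guarded("Tell me a joke."))                 # passes both checks
        print(guarded("Share someone's credit card number."))  # blocked at the input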

History

School

  • Loughborough University, London

Publisher

arXiv (Cornell University)

Version

  • AO (Author's Original)

Publication date

2024-02-02

Notes

This is a preprint; it has not been peer-reviewed.

Language

  • en

Depositor

Dr Jie Meng. Deposit date: 9 April 2024
