LLM safety assessment

The definitive guide to avoiding risk and abuse

The generative artificial intelligence (AI) debate has engrossed the software industry and beyond ever since ChatGPT’s reveal in late 2022.

For a year and a half, companies and individuals have rushed to share their thoughts on disruptive generative AI technologies, often glossing over specific merits and risks.

The lack of clarity around these emerging technologies has left many organisations concerned and overwhelmed. Some companies have banned their use entirely, while others have embraced them to stay innovative, either permitting restricted use or brushing off security concerns altogether.

Regardless of the stance taken, generative AI isn’t going away, and it must be implemented and utilised safely. For that to happen, security teams must understand how these technologies can be abused.
