Security concerns are a top barrier to AI implementation

Written by modzy | Published 2021/08/09

TLDR: Modzy offers a software platform for organizations and developers to responsibly deploy, monitor, and get value from AI at scale. Its patented adversarial defense solution keeps models robust against attacks, scans data, maintains model integrity in the face of poisoning, and protects models against stealing attempts, while its model watermarking solution lets you validate provenance information for models running in production.

AI is brittle. It can be fooled, and threats to the accuracy and performance of your models lurk in unexpected parts of your pipeline.
AI used in critical business systems must be secured against attempts to generate misinformation or degrade model performance. Modzy is charting the path forward to a new level of AI performance and AI model security. Our patented adversarial defense solution keeps your models robust against attacks, scans incoming data, maintains model integrity in the face of poisoned data, and protects models against stealing attempts. Additionally, our model watermarking solution lets you validate provenance information for models running in production.
Security is often cited as a top barrier to AI implementation. Yet many organizations haven't adopted a comprehensive approach to securing AI in production environments or to addressing the nuances of AI model security. Most production systems don't even have a process for checking or validating the provenance of the models they run.
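The details of Modzy's watermarking are proprietary, but a common provenance check in the research literature is trigger-set watermarking: embed a secret set of inputs with predetermined labels at training time, then test whether a deployed model reproduces them. The sketch below is illustrative only, not Modzy's implementation; the function name, the trigger tensors, and the 0.9 match threshold are all assumptions, written in PyTorch.

```python
import torch

@torch.no_grad()
def verify_watermark(model, trigger_inputs, trigger_labels, min_match=0.9):
    """Provenance check: a watermarked model should reproduce the secret
    labels embedded on its trigger set at training time, while an
    unrelated model should only match them near chance."""
    model.eval()
    preds = model(trigger_inputs).argmax(dim=1)
    match_rate = (preds == trigger_labels).float().mean().item()
    # min_match is an illustrative threshold; real deployments would
    # calibrate it against the chance-match rate of unrelated models.
    return match_rate >= min_match, match_rate
```

Because a legitimate copy matches nearly all trigger labels and an unrelated model lands near chance, the match rate itself serves as evidence of provenance for a model running in production.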

Adversarial Defense Solution

Keeping Models Robust

  • Allows models to learn and make decisions in a manner similar to humans
  • Allows models to make predictions in unfamiliar environments and under the threat of adversarial attacks
  • Enhances the well-known backpropagation algorithm commonly used across industry to train deep learning models (see the sketch after this list)
  • Trains models to rely on a holistic set of features learned from the input when making predictions
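Modzy's patented enhancement to backpropagation isn't spelled out here, so the following is a minimal sketch of the best-known public technique in this family: FGSM-style adversarial training, which mixes gradient-crafted perturbed inputs into the ordinary backpropagation loop so the model learns features that survive small perturbations. The function names and the epsilon value are illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial input with one signed-gradient (FGSM) step."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One standard backpropagation step on a 50/50 mix of clean and
    adversarial inputs, nudging the model toward holistic features
    that small perturbations cannot flip."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients left over from crafting x_adv
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice here is to keep the clean-data loss in the objective: training on adversarial examples alone tends to trade away accuracy on unperturbed inputs.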

Data Scanning Solution

  • Identifies adversarial inputs during both training and inference, across different datasets and domains
  • The first method to exploit the transferability of adversarial attacks between models, in both simulated and production environments, in order to detect adversarial inputs in a dataset before they reach the deep learning model (see the sketch after this list)
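Again, the patented scanner itself isn't public; as a minimal sketch of how attack transferability can drive detection, one can run each input through several independently trained surrogate models and flag inputs on which their predictions disagree far more than they do on clean data. Everything below, including the surrogate pool, the disagreement score, and the threshold, is an assumption for illustration, written in PyTorch.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def disagreement_score(surrogates, x):
    """Mean pairwise KL divergence between the predictive distributions
    of independently trained surrogate models. Adversarial perturbations
    transfer imperfectly between models, so they tend to push the
    surrogates toward inconsistent predictions and raise this score."""
    probs = [F.softmax(m(x), dim=1) for m in surrogates]
    score = torch.zeros(x.size(0), device=x.device)
    pairs = 0
    for i in range(len(probs)):
        for j in range(len(probs)):
            if i != j:
                # KL(p_i || p_j); F.kl_div expects log-probs first.
                score += F.kl_div(probs[j].log(), probs[i],
                                  reduction="none").sum(dim=1)
                pairs += 1
    return score / pairs

@torch.no_grad()
def scan_batch(surrogates, x, threshold):
    """Flag suspect inputs before they are fed to the production model."""
    for m in surrogates:
        m.eval()
    return disagreement_score(surrogates, x) > threshold
```

In practice the threshold would be calibrated on known-clean data, for example at the 99th percentile of disagreement scores observed on a held-out clean set.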
