Earlier this week, a paper by researchers from New York University showed that an attacker may be able to manipulate deep-learning-based artificial intelligence (AI) algorithms by planting hidden backdoors in them.

The researchers said that, because deep learning algorithms are so vastly complex, small equations that act as a backdoor can be hidden inside them. The backdoor cannot be removed by retraining the AI on additional sample data; doing so only decreases the model's accuracy.
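The backdoor described in the paper is typically planted through data poisoning: a small trigger pattern is stamped onto a fraction of the training images, which are then relabeled with an attacker-chosen class. The sketch below is a minimal, hypothetical illustration of that idea (function and parameter names are our own, not the paper's code):

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Hypothetical data-poisoning sketch: stamp a small white-square
    trigger onto a fraction of the images and relabel them with the
    attacker's chosen target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    # pick a random subset of the dataset to poison
    idx = rng.choice(n, size=max(1, int(rate * n)), replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # 3x3 trigger patch in the corner
        labels[i] = target_label    # mislabel toward the attacker's class
    return images, labels, idx

# Toy example: 20 grayscale 8x8 "images", all originally labeled 0.
imgs = np.zeros((20, 8, 8))
labs = np.zeros(20, dtype=int)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7, rate=0.2)
```

A model trained on such a dataset learns to behave normally on clean inputs but to output the target class whenever the trigger patch is present, which is why extra clean training data does not reliably remove the behavior.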

They add that the attack scenario is entirely plausible: a hacker could use social engineering to gain access to the cloud service and then insert the backdoored model into the massive stack of AI training model equations.


Source: Bleeping Computer
