To combat security threats, Microsoft builds AI models and trains them to sift out security issues by feeding them labelled bugs and other security data.
The tech giant Microsoft is known for software feats like the Windows operating system and the Microsoft Office suite of apps. Over the years it has steadily refined its products and ventured into other areas such as AI.
Just like any big tech company, Microsoft too is fighting a war against security flaws and vulnerabilities. At Microsoft, 47,000 developers generate nearly 30,000 bugs a month, and these get stored across more than 100 AzureDevOps and GitHub repositories. The challenge is to quickly spot the critical security bugs among them and stay ahead of hackers who could potentially exploit these vulnerabilities.
Scott Christiansen, a senior security programme manager at Microsoft, said that large volumes of semi-curated data are apt for machine learning. Since 2001, Microsoft has collected 13 million bugs.
Microsoft is using AI to help its developers locate potential security threats and fix these critical security issues.
"We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 per cent of the time and accurately identifies the critical, high priority security bugs, 97 per cent of the time," said Scott Christiansen.
To achieve this, Microsoft built an AI model and fed it large numbers of bugs, some labelled as security issues and others not, and let the model learn to tell them apart. Once trained, the model can label bugs that were not pre-classified as threats. Software developers then review the flagged list and patch the issues. Using AI to sift out threats saves time as well.
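Microsoft has not published the details of its model, but the workflow described above is standard supervised text classification: train on bug reports with known labels, then classify new, unlabelled ones. A minimal sketch of that idea, using a from-scratch Naive Bayes classifier over invented bug titles (none of this data or code is Microsoft's):

```python
import math
from collections import Counter, defaultdict

def tokenize(title):
    return title.lower().split()

def train(labelled_bugs):
    """labelled_bugs: list of (title, label) pairs with hand-assigned labels."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of bugs
    for title, label in labelled_bugs:
        label_counts[label] += 1
        word_counts[label].update(tokenize(title))
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(title, word_counts, label_counts, vocab):
    """Pick the label with the highest log prior + log likelihood."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(title):
            # add-one smoothing so unseen words don't zero out a class
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented, hand-labelled training examples standing in for curated bug data.
training_data = [
    ("buffer overflow in parser", "security"),
    ("sql injection in login form", "security"),
    ("privilege escalation via service account", "security"),
    ("button misaligned on settings page", "non-security"),
    ("typo in about dialog", "non-security"),
    ("crash when window resized", "non-security"),
]

model = train(training_data)
print(classify("possible sql injection in search", *model))  # → security
```

In practice a production system would use a far larger corpus and a more capable model, but the principle is the same: labelled history in, predicted labels out, with humans reviewing what the model flags.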
"We discovered that by pairing machine learning models with security experts, we can significantly improve the identification and classification of security bugs," Scott Christiansen noted.