Many concerns about AI (Artificial Intelligence) have to do with the need to protect the data the AI is supposed to analyze. But cloud security can hardly do without AI anymore. Instead of transferring the data to the AI service, the AI could come to the cloud data. In practice, this is already happening.
The digital association Bitkom has repeatedly warned of the adverse effects of the General Data Protection Regulation (GDPR): companies are increasingly forced to stop innovation projects because of the effort of implementing its requirements.
Specifically, every second company (52 percent) reports this for the establishment of data pools, 38 percent for the use of new data analysis tools, and 37 percent for the use of cloud services. Around every third company (34 percent) was set back in innovations involving new software for the digitization of business processes, and 33 percent in the use of new technologies such as AI. Many companies therefore see privacy as a barrier to the cloud and AI, but that does not have to be the case: there are ways to balance privacy, the cloud, and AI.
AI Needs Data And Privacy
Training, testing, and validating machine learning models is known to require data. But instead of transferring the data to the AI system to be trained, one can, for example, rely on federated learning, a relatively new way of developing models for machine learning.
The companies that want to train the AI train the model locally on their own data and send only the resulting model parameters to a central server, which aggregates them into a global model and sends the updated parameters back. If the data is in a cloud, the model to be trained runs in the company's own cloud environment; only the model parameters travel to the central AI service, never the company's cloud data. An exciting model for cloud data protection that can also help with cloud security.
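The exchange described above can be sketched in a few lines. The following is a minimal, illustrative federated-averaging example with made-up data and a simple linear model; the participant names and training settings are assumptions, not part of any specific product. Each "company" trains locally and shares only parameters, never its raw data.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=20):
    """One company's local gradient-descent update on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Private datasets that never leave each company's environment
companies = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    companies.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Each participant sends back only its updated parameters ...
    updates = [local_train(global_w, X, y) for X, y in companies]
    # ... and the central server averages them into the global model
    global_w = np.mean(updates, axis=0)

print(global_w)  # converges toward true_w without pooling any raw data
```

The key privacy property is visible in the loop: the only values that cross the boundary between a participant and the server are the weight vectors.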
Cloud Security Needs AI
The Cloud Security Alliance (CSA) clarifies: “As we move into the future of automation, AI is proving to be a critical part of cyber and cloud security. The ability to learn at the speed that AI produces makes it extremely important to prioritize discovering the potential ways in which AI can both support security and define ways in which standardization can be designed around its proper use.”
Again, data protection plays an important role. CSA explains: Based on legal compliance requirements, the security information may need to be obfuscated or sanitized to protect personally identifiable information. AI can perform the required analysis on obfuscated data streams where simple fields like usernames and IP addresses have been hashed to protect information.
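Field-level obfuscation of the kind CSA describes can be done before events ever reach the analysis engine. The sketch below is illustrative only; the field names, the salt, and the truncated-hash format are assumptions, not taken from any particular SIEM. A deterministic salted hash is used so that the AI can still correlate events from the same user or IP without ever seeing the raw values.

```python
import hashlib
import json

def hash_field(value, salt="per-tenant-secret"):
    """Deterministic salted hash: the same input always maps to the
    same token, preserving correlation while hiding the raw value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def sanitize(event, pii_fields=("username", "src_ip")):
    """Replace PII fields with hashed tokens before analysis."""
    out = dict(event)
    for field in pii_fields:
        if field in out:
            out[field] = hash_field(out[field])
    return out

event = {"username": "alice", "src_ip": "10.0.0.7",
         "action": "login_failed"}
clean = sanitize(event)
print(json.dumps(clean))  # username and src_ip are now opaque tokens
```

Note the trade-off in the design: determinism is what keeps hashed streams analyzable, but it also means the salt must stay secret, otherwise the tokens could be reversed by brute-forcing common usernames and IP ranges.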
But there are other ways to strengthen data protection when using AI to secure cloud services: The AI can come to the cloud data and environment to be protected.
An Example: BlueVoyant And Microsoft Azure
The importance of cloud security is evident: cybercriminals target more than just traditional endpoints. As a result, endpoint-centric detection and response solutions alone do not provide the visibility and responsiveness needed to identify and neutralize these broader attacks.
That is why Security Operation Centers (SOC), which evaluate the security-relevant information from endpoints, networks, and clouds, look for signs of incidents, and initiate responses, are central elements of many security strategies.
The so-called BlueVoyant Modern SOC combines managed services from the security provider BlueVoyant with Microsoft's Azure Sentinel and the XDR functions of Microsoft 365 Defender and Azure Defender. It offers Microsoft users a combined cyber security solution directly in their Azure Sentinel environment.
Data Remains In The Customer’s Cloud Environment
Unlike other Managed Security Service Provider (MSSP) solutions, which require customers to send their log data and other security-related information to other clouds or traditional data centers, the BlueVoyant Modern SOC lets organizations keep this data in their own Azure Sentinel environments.
According to the vendor, this secures mission-critical data and assets, reduces costs, and improves compliance. The security-relevant cloud data is evaluated in the customer's own cloud environment: the security algorithms come to the cloud data, not the cloud data to the security solutions. This can significantly ease cloud data protection.
As with federated learning, the customers' cloud environments do not pass on their data, only the threat knowledge derived from it, and in return they benefit from the threat intelligence gathered across all monitored cloud environments.
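This share-the-knowledge-not-the-data pattern can be made concrete with a small, hypothetical sketch (the event fields, threshold, and indicator format are my assumptions, not BlueVoyant's implementation). Each monitored environment derives indicators from its own logs locally; only those derived indicators are pooled centrally.

```python
import hashlib
from collections import Counter

def extract_indicators(local_events, threshold=3):
    """Runs inside a customer's own environment: find source IPs with
    repeated failed logins and export them only as hashed indicators."""
    fails = Counter(e["src_ip"] for e in local_events
                    if e["action"] == "login_failed")
    return {hashlib.sha256(ip.encode()).hexdigest()[:16]
            for ip, n in fails.items() if n >= threshold}

def merge_threat_intel(per_tenant_indicators):
    """Central service pools only the derived knowledge, no raw logs."""
    shared = set()
    for indicators in per_tenant_indicators:
        shared |= indicators
    return shared

# Two tenants, each with a brute-force pattern in its private logs
tenant_a = [{"src_ip": "203.0.113.9", "action": "login_failed"}] * 4
tenant_b = [{"src_ip": "198.51.100.4", "action": "login_failed"}] * 5

intel = merge_threat_intel([extract_indicators(tenant_a),
                            extract_indicators(tenant_b)])
print(len(intel))  # two shared indicators, zero shared log lines
```

Every tenant that contributes to `intel` can then match its own traffic against indicators observed elsewhere, without any tenant's raw log data leaving its environment.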