Four Predictions for 2021: Ethics and Security Must Underpin AI Initiatives

By Eliano Marques, EVP Data & AI, Protegrity
Posted on: December 15, 2020

Just about every organization wants to extract value from data to derive a true understanding of how to create products, offer services, best serve customers, and make sound decisions, among other things.

Machine learning and deep learning, two of the building blocks of AI, entice companies to move their data to cloud-based applications and services to receive, in near-real time, insights and long-range forecasts. When AI powers advanced analytics and other applications, companies of all sizes punch above their weight and deliver championship results.

While we can expect even more organizations to seek that championship in 2021, AI isn’t a plug-and-play link to the belt. It requires businesses to weigh moral obligations: the ethics behind how AI models “think” or “reach conclusions,” and how to avoid biases, whether inherent to the data or intentionally introduced by human programmers. AI without the consideration of ethics isn’t a business advancement, but rather technology that can run amok. Privacy and the protection of people’s identities and personal information also weigh into any use of AI.

An understanding and pursuit of responsible and secure AI are just a couple of tech developments I see playing out over 2021. They are two of the four predictions I have for the upcoming year:

Privacy-preserving Techniques Will Drive ‘Responsible AI’

Over the past few years, data sharing has been on the rise, as organizations seek to do more with data and advance their AI and machine learning (ML) capabilities. Thankfully, innovators have also recognized the need for “Responsible AI,” which prioritizes privacy and requires greater governance of the decisions made by AI models. While there is an awareness today of which technologies can make AI safer and more responsible, research on emerging techniques for multi-party computation will be a priority in this coming year, particularly as organizations seek out new ways to share data without compromising security.
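To make the idea concrete, multi-party computation lets organizations compute a joint result without any party revealing its raw inputs. Below is a minimal, illustrative sketch of additive secret sharing, one common MPC building block; the party count, modulus, and function names are assumptions for illustration, not a production protocol:

```python
import random

PRIME = 2**61 - 1  # field modulus; all arithmetic happens mod this prime

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares; any smaller subset reveals nothing about the secret."""
    return sum(shares) % PRIME

# Two organizations contribute values without disclosing them to each other:
a_shares = share(42)
b_shares = share(100)

# Each party locally adds its own share of a and b ...
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]

# ... and only the combined result is ever reconstructed.
print(reconstruct(sum_shares))  # prints 142
```

The point of the sketch is that each individual share is a uniformly random number, so the joint sum can be computed and revealed while the underlying inputs stay private.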

New Computational Models Will Force Data Privacy to Evolve

Decades ago, before smartphones existed and the internet became commonplace, data had a single home: the database. Data would be moved from database to database with protection for each application. With data now residing almost everywhere, the privacy and security of that data must evolve to protect it wherever it’s managed, moved to, and analyzed.

Companies are increasingly adopting analytics and machine-learning systems across organizational functions, such as human resources, operations, and customer success, to make better business decisions by tapping into sensitive customer and corporate data. These systems present new computational requirements that are disrupting the field of data security. Companies should implement protection that can evolve with these new computational requirements so data can move safely across locations and datasets.
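One way protection can travel with the data is tokenization: replacing a sensitive field with a stable surrogate before the record moves between systems, so joins and counts still work downstream. Here is a minimal sketch of deterministic tokenization using an HMAC; the key handling, token format, and field names are illustrative assumptions, not a description of any specific product:

```python
import hmac
import hashlib

SECRET_KEY = b"example-key"  # illustrative only; real systems use a managed key store

def tokenize(value: str) -> str:
    """Replace a sensitive value with a deterministic, non-reversible token."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:16]

record = {"customer_id": "C-1001", "email": "jane@example.com", "region": "EMEA"}

# Protect the sensitive field before the record leaves its source system;
# because tokenization is deterministic, the token still supports joins and counts.
safe_record = {**record, "email": tokenize(record["email"])}
print(safe_record["email"])  # e.g. "tok_..." (stable for the same input)
```

Because the same input always yields the same token, analytics across locations and datasets can proceed on the protected values without ever exposing the originals.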

Pandemic Will Drive Investment in Cloud-Native, SaaS, Automation

The COVID-19 pandemic has turned industries upside down, igniting rapid digital transformation in every market and across every function of the business. The adoption of cloud-native and SaaS applications has skyrocketed, and companies will be increasingly looking to automated machine learning (auto ML) and robotic process automation (RPA) to augment the productivity of their newly remote workforces.

Many companies that are playing catch-up will have no choice but to buy out-of-the-box technologies, such as automated web and mobile application-development tools, that they can begin using immediately. Considering that massive corporations have implemented long-term work-from-home policies, we can expect the fast pace of digital transformation to continue through 2021, as companies look to eliminate the complexities and uncertainties of remote business.

An Empty Cookie Jar: Cookies to Leave Browsers by 2022

It’s anticipated that cookies will be leaving browsers in the next few years, representing a major shift for industries such as advertising and social media that currently rely on cookies to track data that is unique to individuals. However, all is not lost for digital advertisers. When cookies disappear, alternative tools will be created to take their place. My hope for the coming year is that in preparation for the demise of cookies, whatever is created to replace them will let individuals have better control of how their data is being used, stored, and shared, offering greater levels of online privacy.

Don’t let your organization’s hand get caught in the cookie jar in 2021. Think carefully about how you will carry out AI initiatives, balancing business gains with ethics and privacy measures that will protect your and your customers’ best interests. And continue investing in SaaS, cloud-native, and automation technologies, but make data security a priority—it’s the right thing to do and will make everyone’s jobs easier.
