NO ONE PROVIDES MORE DATA PROTECTION METHODS THAN PROTEGRITY.  

MORE OPTIONS. MORE FLEXIBILITY.
COMPLETE DATA PROTECTION.

In the Clear. Monitored. Masked (Dynamic or Static). Encryption (including Format-Preserving Encryption). Vaultless Tokenization. Anonymization. With more than 120 US patents, Protegrity not only offers them all; we invented many of them. Protegrity enables you to control the method of protection, and to combine methods to match the sensitivity of the data and how it is used.
request a demo
vaultless tokenization
In its most basic form, tokenization simply substitutes a randomly generated value, a token, for a cleartext value. A lookup table, or token vault, is kept in a secure location, mapping each cleartext value to its corresponding token. The token's data type and length usually remain the same as the cleartext value's, and the token lookup table acts as a key, allowing the cleartext value to be retrieved from the token. Tokenization is reversible, and because the data type and length do not change, it is an excellent method for protecting individual fields of data in transactional or analytical systems.
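To make the mechanics concrete, here is a minimal Python sketch of classic vault-based tokenization. The in-memory dictionaries stand in for the secure vault, the digit-only alphabet is an assumption for the card-number example, and none of the names are Protegrity APIs.

```python
import secrets

# A minimal sketch of vault-based tokenization. The "vault" here is an
# in-memory dict; a real deployment would keep it in a hardened,
# access-controlled store.
vault = {}           # token -> cleartext
reverse_vault = {}   # cleartext -> token, so repeat values reuse one token

def tokenize(cleartext: str) -> str:
    if cleartext in reverse_vault:
        return reverse_vault[cleartext]
    # Generate a random token with the same length and type (digits in, digits out).
    token = "".join(secrets.choice("0123456789") for _ in cleartext)
    while token in vault:  # avoid collisions with existing tokens
        token = "".join(secrets.choice("0123456789") for _ in cleartext)
    vault[token] = cleartext
    reverse_vault[cleartext] = token
    return token

def detokenize(token: str) -> str:
    return vault[token]  # the lookup table is effectively the key

card = "4111111111111111"
token = tokenize(card)
assert len(token) == len(card) and detokenize(token) == card
```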

Standard tokenization can impact performance, so Protegrity pioneered a more sophisticated form that significantly improves performance while removing the liability of sensitive data residing in a "vault." Called Protegrity "Vaultless" Tokenization (PVT), it uses small, static token tables to create unique, random token values without the need for a dynamic, vaulted lookup table. Instead, users benefit from a highly scalable, flexible and powerful protection method for structured and semi-structured data. All the protection, none of the performance drawbacks. And only from Protegrity.
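To illustrate the general idea behind a vaultless design (this is a toy, not Protegrity's proprietary PVT algorithm, and it is not secure as written), the sketch below replaces the dynamic vault with small, static substitution tables. Tokenization becomes a pure computation, so no sensitive lookup data is ever persisted.

```python
import random

# Toy vaultless tokenization: one static, randomly generated permutation
# table per character position. The seed stands in for securely generated
# token tables; 32 positions is an arbitrary illustrative limit.
rng = random.Random(2024)
DIGITS = "0123456789"
TABLES = [rng.sample(DIGITS, len(DIGITS)) for _ in range(32)]
INVERSE = [{t[j]: DIGITS[j] for j in range(10)} for t in TABLES]

def tokenize(cleartext: str) -> str:
    # Each digit is substituted via its position's static table:
    # no vault lookups, nothing sensitive stored anywhere.
    return "".join(TABLES[i][DIGITS.index(c)] for i, c in enumerate(cleartext))

def detokenize(token: str) -> str:
    return "".join(INVERSE[i][c] for i, c in enumerate(token))

card = "4111111111111111"
assert detokenize(tokenize(card)) == card
```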
encryption (including FPE)
Data encryption technology uses mathematical algorithms and cryptographic keys to alter data into binary ciphertext. The corresponding key, used with the authorized algorithm, unlocks the data, reversing the process. There are many forms of data encryption and various key strengths. Encrypted output in the form of ciphertext is binary data and looks nothing like the original cleartext. Encrypting individual fields (columns) in databases is a challenge, so encryption is typically used at the file level. This method is known as "coarse-grained data protection."
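As a concrete example, here is a minimal sketch of coarse-grained encryption using the open-source Python `cryptography` package with AES-256-GCM; key handling is simplified for illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The binary ciphertext bears no resemblance to the cleartext and is longer
# than the input, which is why whole files are easier to encrypt than
# individual database columns.
key = AESGCM.generate_key(bit_length=256)   # in practice, managed by a KMS
aead = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per encryption

ciphertext = aead.encrypt(nonce, b"4111111111111111", None)
print(ciphertext.hex())                     # binary, longer than the input

cleartext = aead.decrypt(nonce, ciphertext, None)  # reversal requires the key
```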

Format-Preserving Encryption (FPE) combines some of the benefits of encryption (a standards-based mathematical algorithm) with some of the benefits of tokenization (output that preserves the data type of the original cleartext). But encrypting and then converting the binary ciphertext back to the original data type carries a significant processing cost, which makes FPE best suited to use cases, such as PCI DSS compliance, where a standards-based method is desired and performance is not a deciding factor.
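The sketch below shows the format-preserving property with a toy balanced-Feistel cipher over digit strings (16 digits in, 16 digits out). It is emphatically not NIST FF1/FF3 and not secure; production FPE should rely on a vetted, standards-based implementation.

```python
import hashlib
import hmac

KEY, ROUNDS = b"demo-key", 4   # illustrative values only

def _f(value: str, r: int, width: int) -> int:
    # Keyed round function: hash the half-block and reduce it to `width` digits.
    mac = hmac.new(KEY, f"{r}:{value}".encode(), hashlib.sha256).hexdigest()
    return int(mac, 16) % 10 ** width

def fpe_encrypt(digits: str) -> str:
    half = len(digits) // 2            # even-length input assumed
    left, right = digits[:half], digits[half:]
    for r in range(ROUNDS):
        left, right = right, f"{(int(left) + _f(right, r, half)) % 10 ** half:0{half}d}"
    return left + right

def fpe_decrypt(digits: str) -> str:
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for r in reversed(range(ROUNDS)):
        left, right = f"{(int(right) - _f(left, r, half)) % 10 ** half:0{half}d}", left
    return left + right

pan = "4111111111111111"
assert fpe_encrypt(pan).isdigit() and fpe_decrypt(fpe_encrypt(pan)) == pan
```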
static & dynamic data masking
Static Data Masking (SDM) is used to protect data in test and development (non-production) environments. SDM masks the data so that it has the appearance of authentic production data, but is not. SDM is often part of a group of solutions known as test data management.
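A minimal sketch of the idea: a production extract is rewritten once, before it lands in the test environment. The field names and fake values below are purely illustrative.

```python
import random

rng = random.Random(42)   # deterministic, so test data is repeatable
FAKE_NAMES = ["Alex Doe", "Sam Roe", "Pat Poe"]

def mask_record(record: dict) -> dict:
    # Returns a copy that looks like authentic production data but is not.
    masked = dict(record)
    masked["name"] = rng.choice(FAKE_NAMES)
    masked["ssn"] = f"900-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}"
    masked["email"] = f"user{rng.randint(1000, 9999)}@example.com"
    return masked

production = [{"name": "Jane Smith", "ssn": "123-45-6789", "email": "jane@corp.example"}]
test_copy = [mask_record(r) for r in production]
```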

Dynamic Data Masking (DDM) is used to protect data on the move; it does not change cleartext data at rest. Agents mask all or part of the data as it is displayed, so unauthorized users see only masked values while authorized users see the cleartext. DDM is a data protection method often used in production environments. But DDM brings considerable risk, because the data at rest remains clear and unprotected.
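A minimal sketch of dynamic masking, with hypothetical roles and rules: the stored value never changes; a masking layer rewrites it per request based on who is asking.

```python
def mask_card(pan: str) -> str:
    return "*" * (len(pan) - 4) + pan[-4:]       # e.g. ************1111

def read_card(pan_at_rest: str, role: str) -> str:
    # Authorized roles see cleartext; everyone else gets a masked view.
    return pan_at_rest if role == "fraud_analyst" else mask_card(pan_at_rest)

stored = "4111111111111111"                      # still cleartext at rest!
print(read_card(stored, "support_agent"))        # ************1111
print(read_card(stored, "fraud_analyst"))        # 4111111111111111
```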
anonymization
Anonymization is a unique data protection method offered by Protegrity, suited especially to cases where the customer never needs to "re-identify" the data: it is irreversible. Anonymization removes the direct identifiers in data, leaving only quasi-identifiers, pieces of data that are true of a specific data subject as well as of other data subjects (e.g., age, race, gender). With anonymization, fake data is added to a data set, or the real data is generalized, so it remains true but imprecise. Once the data is "fuzzy" enough, a person can no longer be identified from a given record.
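A minimal sketch of anonymization by generalization, with illustrative field names: exact values become coarser ranges, so each record stays true but imprecise, and direct identifiers are simply dropped.

```python
def generalize(record: dict) -> dict:
    # Direct identifiers (name, SSN) never make it into the output;
    # quasi-identifiers are coarsened so they no longer pinpoint one person.
    age = record["age"]
    return {
        "age_band": f"{age // 10 * 10}-{age // 10 * 10 + 9}",  # 34 -> "30-39"
        "zip3": record["zip"][:3] + "**",                      # 06901 -> "069**"
        "diagnosis": record["diagnosis"],                      # analytic value kept
    }

print(generalize({"name": "Jane Smith", "age": 34, "zip": "06901", "diagnosis": "flu"}))
```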

Anonymization is rising in popularity with Protegrity customers because properly anonymized data falls outside the scope of privacy regulations such as GDPR and CCPA. It is also an exceptional method for secure machine learning and artificial intelligence initiatives: with anonymization, a company can train ML models and achieve comparable results without the privacy risk.

"Accelerating Privacy Regulation” was named the #1 risk among executives surveyed for
Gartner’s Q1 2019 Emerging Risk Monitor Report."

view the report
K-Anonymity
K-Anonymity is a technique used when a company storing sensitive data does not want to introduce fake data into its data set. K-Anonymity removes direct identifiers so the data is left with only quasi-identifiers, abstracting away the specifics in the data set. Every combination of quasi-identifier values must then be shared by at least K records, so that any one record is indistinguishable from at least K-1 others.
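A minimal sketch of a K-anonymity check, with hypothetical field names: group the records by their quasi-identifier values and verify that every group contains at least K records.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

data = [
    {"age_band": "30-39", "zip3": "069**", "diagnosis": "flu"},
    {"age_band": "30-39", "zip3": "069**", "diagnosis": "asthma"},
    {"age_band": "40-49", "zip3": "069**", "diagnosis": "flu"},
]
print(is_k_anonymous(data, ["age_band", "zip3"], k=2))  # False: one group of 1
```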
L-Diversity & T-Closeness
L-Diversity and T-Closeness are techniques applied on top of K-Anonymity. Once a data set is de-identified via K-Anonymity, the next step is to examine it and ensure that sensitive values within each group are sufficiently diverse (L-Diversity models further anonymize groups that exhibit homogeneity) and that the distribution of those values in each group is close enough to their distribution across the entire data set (T-Closeness models).
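Sketched below are simple versions of both checks, layered on the grouping used for K-Anonymity; the distance measure (total variation distance) is one common choice among several for T-Closeness.

```python
from collections import Counter

def group_by(records, quasi_identifiers):
    groups = {}
    for r in records:
        groups.setdefault(tuple(r[q] for q in quasi_identifiers), []).append(r)
    return groups

def is_l_diverse(records, quasi_identifiers, sensitive, l):
    # Every group must contain at least L distinct sensitive values.
    return all(len({r[sensitive] for r in grp}) >= l
               for grp in group_by(records, quasi_identifiers).values())

def is_t_close(records, quasi_identifiers, sensitive, t):
    # Each group's distribution of sensitive values must stay within
    # distance t of the distribution across the entire data set.
    overall = Counter(r[sensitive] for r in records)
    total = len(records)
    for grp in group_by(records, quasi_identifiers).values():
        local = Counter(r[sensitive] for r in grp)
        distance = sum(abs(local[v] / len(grp) - overall[v] / total)
                       for v in overall) / 2
        if distance > t:
            return False
    return True
```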

how we do it: Methods of Data Protection

There are many methods to protect data. Some are quick and easy, yet risky. Others are impenetrable like a vault, but slow down availability. You need them all, and Protegrity provides the full range so you can decide when and where to use which method. This in-depth reference guide will help you understand your options.

GET THE GUIDE