In its most basic form, data tokenization
simply substitutes a randomly generated value, a token, for a cleartext value. A lookup table, or token vault, is kept in a secure place, mapping each cleartext value to its corresponding token. The token usually keeps the same data type and length as the cleartext value, and the vault acts as the key that allows the cleartext value to be retrieved from the token. Because tokenization is reversible and preserves data type and length, it is an excellent method for protecting individual fields of data in transactional or analytical systems.
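To make this concrete, here is a minimal Python sketch of vaulted tokenization, assuming an in-memory `TokenVault` class (a hypothetical name) in place of the hardened, persistent datastore a real vault would be. It generates a random token that preserves the length and character class of the cleartext, and stores the mapping so the value can be recovered later.

```python
import secrets
import string

class TokenVault:
    """Toy token vault: maps cleartext values to random tokens and
    back. Real vaults are hardened, access-controlled, persistent
    datastores, not two in-memory dictionaries."""

    def __init__(self):
        self._to_token = {}  # cleartext -> token
        self._to_clear = {}  # token -> cleartext

    def tokenize(self, cleartext: str) -> str:
        if cleartext in self._to_token:  # reuse the existing token
            return self._to_token[cleartext]
        while True:
            # Preserve data type and length: digits map to random
            # digits, letters to random letters, everything else stays.
            token = "".join(
                secrets.choice(string.digits) if c.isdigit()
                else secrets.choice(string.ascii_letters) if c.isalpha()
                else c
                for c in cleartext
            )
            if token not in self._to_clear:  # keep tokens unique
                break
        self._to_token[cleartext] = token
        self._to_clear[token] = cleartext
        return token

    def detokenize(self, token: str) -> str:
        # Reversal is a simple vault lookup.
        return self._to_clear[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token.isdigit() and len(token) == 16
assert vault.detokenize(token) == "4111111111111111"
```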
Standard, vaulted tokenization can impact performance because every tokenize and detokenize operation requires a lookup in a vault that grows with the data, so Protegrity pioneered a more sophisticated form that eliminates this bottleneck while removing the liability of sensitive data residing in a "vault." Called Protegrity Vaultless Tokenization (PVT), it uses small, static token tables to create unique, random token values without the need for a dynamic, vaulted lookup table. Instead, users benefit from a highly scalable, flexible, and powerful protection method for structured and semi-structured data. All the protection, none of the performance drawbacks. And only from Protegrity.
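Protegrity's actual PVT algorithm is proprietary, so the following is only a hedged sketch of the general vaultless idea it describes: small, static token tables, derived once from a secret, substitute each digit by position, so tokens can be created and reversed anywhere the tables are available with nothing per-value ever written to a vault. The `make_token_tables`, `tokenize`, and `detokenize` names and the permutation-table construction are illustrative assumptions, not Protegrity's method.

```python
import random

DIGITS = "0123456789"

def make_token_tables(n_tables: int, secret: int) -> list[str]:
    """Build small, static token tables: random permutations of the
    digits 0-9, derived once from a protected secret (an int seed
    here, standing in for real key material)."""
    rng = random.Random(secret)
    tables = []
    for _ in range(n_tables):
        perm = list(DIGITS)
        rng.shuffle(perm)
        tables.append("".join(perm))
    return tables

def tokenize(value: str, tables: list[str]) -> str:
    # Substitute each digit through a position-dependent static table.
    # No per-value mapping is stored, so there is nothing to vault.
    return "".join(tables[i % len(tables)][int(c)] for i, c in enumerate(value))

def detokenize(token: str, tables: list[str]) -> str:
    # Invert each table lookup to recover the original digit.
    return "".join(str(tables[i % len(tables)].index(c)) for i, c in enumerate(token))

tables = make_token_tables(16, secret=0xC0FFEE)
tok = tokenize("4111111111111111", tables)
assert tok.isdigit() and len(tok) == 16
assert detokenize(tok, tables) == "4111111111111111"
```

Because the tables in this sketch are tiny and static, every node can hold its own copy, so tokenization scales horizontally with no round trip to a central vault.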