Idea about securing data from data theft

Networking/Security Forums -> Databases

Author: breaker PostPosted: Tue Aug 21, 2007 2:41 pm    Post subject: Idea about securing data from data theft
In light of the recent theft of data from (…)

I thought to myself "how do we prevent the theft of data when the user appears to be legitimate?" (i.e. they have logged in)

And I thought: what about a deliberate skewing of all of your data? For example:

Table Customer
CustomerID       | name          | email            |
-----------------|---------------|------------------|
1                | Bill Jimson   |                  |
2                | Sid Sisdon    |                  |

The above is a normal "unskewed" table. Now what if we were to apply some sort of formula to the table so that it jumbled a certain field in a certain way?

Let's say our formula is basic and jumbles the e-mail address by moving it up one record:
Table Customer
CustomerID       | name          | email            |
-----------------|---------------|------------------|
1                | Bill Jimson   |                  |
2                | Sid Sisdon    |                  |

All applications that access this data then need to account for the skewing and need to correct it somehow.
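A rough sketch in Python of what I mean (the function names and the example.com addresses are just my own illustration, not part of any real product):

```python
def skew(rows, column, offset=1):
    """Rotate one column's values 'up' by `offset` records,
    leaving every other column in place."""
    values = [row[column] for row in rows]
    shifted = values[offset:] + values[:offset]  # wrap around at the end
    return [dict(row, **{column: v}) for row, v in zip(rows, shifted)]

def unskew(rows, column, offset=1):
    """Invert the skew: rotating by (n - offset) undoes a rotation by offset."""
    return skew(rows, column, offset=len(rows) - offset)

customers = [
    {"CustomerID": 1, "name": "Bill Jimson", "email": "bill@example.com"},
    {"CustomerID": 2, "name": "Sid Sisdon", "email": "sid@example.com"},
]

stored = skew(customers, "email")    # what actually sits in the table
restored = unskew(stored, "email")   # what a skew-aware application sees
```

The point is that every application touching the table would need the `unskew` step, while a thief dumping the raw table only sees `stored`.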

This is just something that came to my head. Here are a few drawbacks with it:
1. Extra logic needed to unskew data
2. Why not just encrypt? (Perhaps unskewing would be less expensive?)
3. This probably breaks just about every database rule going.

The skewing could have different formulas applied to different columns in the table. Whilst a malicious user posing as a credible user could still get some data back, it would be of significantly less value to someone attempting to use the data in a phishing attack.
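A sketch of the per-column version (again just illustrative Python; the offsets and the sample rows are made up):

```python
def skew_table(rows, offsets):
    """Apply a different rotation to each named column.
    `offsets` maps column name -> how far to rotate that column 'up'."""
    out = [dict(row) for row in rows]
    n = len(rows)
    for column, offset in offsets.items():
        values = [row[column] for row in rows]
        for i in range(n):
            out[i][column] = values[(i + offset) % n]
    return out

def unskew_table(rows, offsets):
    """Invert every per-column rotation in one pass."""
    n = len(rows)
    return skew_table(rows, {c: (n - o) % n for c, o in offsets.items()})

customers = [
    {"CustomerID": 1, "name": "Bill Jimson", "email": "bill@example.com"},
    {"CustomerID": 2, "name": "Sid Sisdon",  "email": "sid@example.com"},
    {"CustomerID": 3, "name": "Ann Ardson",  "email": "ann@example.com"},
]
offsets = {"email": 1, "name": 2}

stored = skew_table(customers, offsets)
```

With different offsets per column, a dumped row mixes fields from several different people.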

What do you guys think? Is there already something out there like this?

Moderator note: fixed table formatting - capi

Author: Groovicus    Location: Centerville, South Dakota    PostPosted: Wed Aug 22, 2007 5:59 pm    Post subject:
I can think of quite a few things wrong with this. The idea behind a database is that each record describes a single entity, and one can use the information to identify relationships between the data. If the data contained within a row no longer applies to that row, how can one possibly hope to guarantee accurate results? If I do a "SELECT email FROM person WHERE name='Bill Jimson'", I know that I am going to get his correct email. If it is not correct, then what good does it do me? What if I want a list of all people whose email addresses end in ""? How many queries will I have to do to get the IDs of those who have that email address, then figure out the offset, and then finally get the name that goes with that address? Are you going to guarantee that it will be correct 100% of the time?
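To make the cost concrete, here is roughly what a single lookup would take under a one-record skew (sqlite3 and the sample addresses are purely illustrative, and this sketch even grants the generous assumption that row IDs are contiguous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
# Emails are stored shifted 'up' one record relative to their real owners,
# so row 1 holds row 2's address and the last row wraps around to row 1's.
conn.executemany("INSERT INTO person VALUES (?, ?, ?)",
                 [(1, "Bill Jimson", "sid@example.com"),
                  (2, "Sid Sisdon", "bill@example.com")])

# Query 1: find the row ID of the person we actually want.
(row_id,) = conn.execute(
    "SELECT id FROM person WHERE name = ?", ("Bill Jimson",)).fetchone()

# Query 2: count the rows so we can wrap the offset at the end of the table.
(n,) = conn.execute("SELECT COUNT(*) FROM person").fetchone()

# Query 3: fetch the email stored one record below, wrapping around.
(email,) = conn.execute(
    "SELECT email FROM person WHERE id = ?", ((row_id % n) + 1,)).fetchone()
```

Three round trips and an ID-arithmetic assumption, versus one indexed lookup on an ordinary table.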

When doing a deletion, you have to find the record you want to delete, figure out which other records contain the bits of data that go with it, alter all of them, and still guarantee that the next time someone queries the database they get correct results. What do you do with the entries that previously contained data from the now-deleted entry? Leave them null? How many re-alignments will you have to do for just one deletion, or just one insertion?

There is also no concept of "one record up" or "one record down". If I do ten selects on a table with 100 rows, chances are I will get them all back in the order I put them in. Maybe even with 500 rows. But I have tables with tens of thousands of rows that never give me results in the same order. Records are stored in trees, and the trees are periodically rebalanced.
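The only row order you can rely on is one you ask for explicitly; without an ORDER BY, the result order is unspecified, so "one record up" is simply undefined (illustrative sqlite3 sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
# Insert out of ID order on purpose.
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(2, "Sid Sisdon"), (1, "Bill Jimson")])

# "SELECT name FROM person" may return rows in whatever order the engine
# likes; only an explicit sort key gives a guaranteed, repeatable order.
names = [n for (n,) in conn.execute("SELECT name FROM person ORDER BY id")]
```

So any skewing scheme would have to be defined over an explicit key column, not over the physical position of rows on disk.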

The contract with any database is that you get out the data that you put into it. If you programmatically alter records as they go into the database, that contract can no longer be honored, and the data is largely useless to everybody.
