Implementing GDPR Data Subject Rights: Access, Deletion, and Portability
A technical deep-dive into implementing all eight GDPR data subject rights, including erasure across primary databases and backups, identity verification, and meeting the one-month response deadline.
GDPR Chapter III grants individuals eight distinct rights over their personal data. Most engineering teams focus on the right of erasure (the "right to be forgotten") because it is the most operationally complex, but a compliant implementation must handle all eight rights, each with its own technical requirements, timelines, and exceptions. This guide covers the technical implementation of each right and the infrastructure decisions that make compliance sustainable at scale.
The Eight Rights and Their Technical Implications
Before diving into implementation, here is a precise summary of each right and what it demands from your systems:
1. Right of Access (Article 15): The data subject can request a copy of all personal data you hold about them, plus metadata about the processing (purposes, recipients, retention periods, source of data). Response deadline: one month, extendable by two months for complex requests.
2. Right to Rectification (Article 16): Individuals can request correction of inaccurate or incomplete personal data. Under Article 19, you must also communicate each rectification to the recipients (including processors) to whom the data has been disclosed, unless doing so proves impossible or involves disproportionate effort.
3. Right to Erasure (Article 17): The "right to be forgotten." Must erase personal data when consent is withdrawn, when data is no longer necessary for its original purpose, or when the individual objects and no overriding legitimate interest exists.
4. Right to Restriction of Processing (Article 18): While a dispute is in progress or pending deletion verification, processing must be restricted to storage only. Data must not be used for analytics or shared with third parties.
5. Right to Data Portability (Article 20): Provide data in a structured, commonly used, machine-readable format (JSON or CSV are both acceptable). Applies only to data provided by the individual and processed by automated means under consent or contract.
6. Right to Object (Article 21): Individuals can object to processing based on legitimate interests or for direct marketing. For direct marketing, you must stop processing immediately with no exceptions.
7. Rights Related to Automated Decision-Making (Article 22): Individuals have the right not to be subject to solely automated decisions with significant legal or similarly significant effects, including profiling.
8. Right to Withdraw Consent (Article 7(3)): Withdrawal must be as easy as giving consent and must not affect the lawfulness of prior processing.
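Of these, the right to restriction (Article 18) is the easiest to overlook in application code, because it requires a "storage only" mode rather than deletion. A minimal sketch of a restriction gate, assuming a `processingRestricted` flag on the user record (the flag name and helper are illustrative, not part of any standard API):

```typescript
type ProcessingPurpose = 'storage' | 'analytics' | 'marketing' | 'third_party_sharing';

interface UserRecord {
  userId: string;
  processingRestricted: boolean; // set when an Article 18 request is granted
}

// Gate every non-storage use of personal data behind a restriction check.
function assertProcessingAllowed(user: UserRecord, purpose: ProcessingPurpose): void {
  if (user.processingRestricted && purpose !== 'storage') {
    throw new Error(
      `Processing restricted for user ${user.userId}: ${purpose} not permitted under Article 18`
    );
  }
}
```

Calling this check at the entry point of every analytics job, marketing sync, and data-sharing integration is what makes the restriction enforceable rather than aspirational.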
Building a Data Subject Request (DSR) Intake System
The first requirement is a consistent intake mechanism. Ad-hoc requests via support tickets are unmanageable at scale and create audit trail gaps. Build a dedicated DSR portal or intake form that:
- Collects the request type, identity information, and contact details
- Issues a unique request ID
- Records the timestamp (the one-month response clock, commonly tracked as 30 days, starts from receipt)
- Sends an acknowledgment to the requester
A minimal DSR record schema:
interface DataSubjectRequest {
requestId: string; // UUID
type: 'access' | 'erasure' | 'portability' | 'rectification' | 'restriction' | 'objection';
status: 'received' | 'verifying' | 'processing' | 'completed' | 'rejected';
receivedAt: Date;
deadlineAt: Date; // receivedAt + 30 days
extendedDeadlineAt?: Date; // +60 days if complex
subjectEmail: string;
subjectName?: string;
verificationStatus: 'pending' | 'verified' | 'failed';
verificationMethod?: 'email_otp' | 'account_login' | 'document';
completedAt?: Date;
rejectionReason?: string;
auditLog: AuditEntry[];
}
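Computing the statutory deadlines up front keeps later dashboard queries simple. A sketch, approximating the one-month deadline as 30 days to match the schema comment above (the helper names are illustrative):

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

function computeDeadlines(receivedAt: Date, isComplex: boolean) {
  return {
    deadlineAt: new Date(receivedAt.getTime() + 30 * DAY_MS),
    // Article 12(3) allows up to two further months for complex requests
    extendedDeadlineAt: isComplex
      ? new Date(receivedAt.getTime() + 90 * DAY_MS)
      : undefined
  };
}

function daysRemaining(deadlineAt: Date, now: Date = new Date()): number {
  return Math.ceil((deadlineAt.getTime() - now.getTime()) / DAY_MS);
}
```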
Identity Verification
You must verify the requester's identity before fulfilling any DSR, but you cannot require disproportionate information. The key principle: use the minimum data necessary to confirm identity.
For authenticated users (who have an account), the strongest approach is requiring them to initiate the request while logged in. This inherits your existing authentication assurance level.
For unauthenticated requests (e.g., someone who deleted their account), consider:
- Email OTP verification to the address on file
- Asking for the last four digits of a payment method
- Requesting a piece of information only a legitimate account holder would know
Document your verification decisions. If you reject a request due to failed verification, the rejection itself and the reason must be communicated within one month.
async function initiateVerification(requestId: string, email: string): Promise<void> {
const otp = generateSecureOTP(6);
const expiresAt = new Date(Date.now() + 15 * 60 * 1000); // 15 minutes
await storeOTP({ requestId, otpHash: hashOTP(otp), expiresAt });
await sendVerificationEmail(email, otp, requestId);
}
async function verifyOTP(requestId: string, submittedOtp: string): Promise<boolean> {
const stored = await getOTP(requestId);
if (!stored || stored.expiresAt < new Date()) return false;
const valid = timingSafeCompare(hashOTP(submittedOtp), stored.otpHash);
if (valid) {
await markRequestVerified(requestId);
}
return valid;
}
Implementing the Right of Access
The access request response must include:
- All personal data fields (not just the obvious ones — include derived data, behavioral logs, support ticket content)
- The purposes for which each category is processed
- The legal basis for processing
- Retention periods
- Any third parties the data has been shared with
- The source of the data if not collected directly
Build a "data collector" service that queries each data store and assembles the response:
async function collectUserData(userId: string): Promise<UserDataExport> {
const [
profileData,
activityLogs,
supportTickets,
paymentRecords,
analyticsEvents,
marketingInteractions
] = await Promise.all([
db.users.findOne({ userId }),
db.activityLogs.find({ userId }),
db.supportTickets.find({ userId }),
paymentService.getTransactionHistory(userId),
analyticsService.getEventHistory(userId),
marketingService.getEmailHistory(userId)
]);
return {
profile: sanitizeForExport(profileData),
activity: activityLogs,
support: supportTickets,
payments: redactSensitivePaymentData(paymentRecords),
analytics: analyticsEvents,
marketing: marketingInteractions,
generatedAt: new Date(),
format: 'JSON'
};
}
Deliver this as a downloadable JSON or CSV file, not a web page. The data subject must be able to save and re-use it.
Implementing the Right to Erasure
Erasure is the most technically complex right because personal data is scattered across:
- Primary databases (user records, associated records)
- Backups (database snapshots, point-in-time recovery)
- Analytics systems (data warehouses, event streams)
- Third-party processors (email marketing, CRM, support tools)
- Log files (application logs, access logs, error tracking)
- CDN and edge caches
- Derived data (computed fields, ML training sets)
Primary Database Erasure
A naive approach of deleting the user record often breaks referential integrity and loses audit trails you may be legally required to keep (financial records, for example, must be retained for several years under tax law).
The recommended pattern is pseudonymization at erasure time: replace personal data fields with tombstone values so that only a de-identified skeleton record remains, preserving referential integrity and any legally required audit records.
async function eraseUserData(userId: string): Promise<void> {
const tombstone = {
email: `deleted_${userId}@erased.invalid`,
name: '[Deleted]',
phone: null,
address: null,
ipAddress: null,
deviceFingerprints: [],
profilePicture: null,
erasedAt: new Date(),
erasureReason: 'gdpr_request'
};
await db.users.updateOne({ userId }, { $set: tombstone });
// Erase from associated collections where user identity is stored
await db.activityLogs.updateMany(
{ userId },
{ $set: { ipAddress: null, userAgent: null, location: null } }
);
// Hard-delete records with no legitimate retention requirement
await db.marketingPreferences.deleteOne({ userId });
await db.sessionTokens.deleteMany({ userId });
}
Backup Erasure
This is where many teams stumble. The GDPR text does not address backups explicitly, but supervisory authorities have generally accepted that data in backups may be put "beyond use" until the backup expires, provided you ensure that when a backup is restored, the erasure is re-applied before the data is processed again. Two common approaches:
Approach 1: Erasure log with restore-time application. Keep a log of all erasure requests. When a backup is restored, run the erasure log against the restored database before it goes live. This is the most practical approach for most systems.
// Store erasure record separately from the main database
await erasureLog.insertOne({
userId,
erasedAt: new Date(),
fieldsErased: Object.keys(tombstone),
requestId
});
Approach 2: Backup encryption with key deletion. Encrypt backups with per-user keys. Deleting the key effectively makes that user's data in old backups unrecoverable. This is elegant but requires significant infrastructure investment upfront.
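A minimal sketch of the key-deletion idea ("crypto-shredding") using Node's crypto module. The in-memory key map is illustrative only; a production system would hold per-user keys in a KMS or a dedicated key table:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'crypto';

// Per-user data encryption keys. Deleting an entry "crypto-shreds" that
// user's data in every backup encrypted with it.
const userKeys = new Map<string, Buffer>();

function encryptForBackup(userId: string, plaintext: Buffer): Buffer {
  let key = userKeys.get(userId);
  if (!key) {
    key = randomBytes(32);
    userKeys.set(userId, key);
  }
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  // Layout: 12-byte IV | 16-byte GCM auth tag | ciphertext
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

function decryptFromBackup(userId: string, blob: Buffer): Buffer {
  const key = userKeys.get(userId);
  if (!key) throw new Error(`Key for ${userId} deleted; data is unrecoverable`);
  const decipher = createDecipheriv('aes-256-gcm', key, blob.subarray(0, 12));
  decipher.setAuthTag(blob.subarray(12, 28));
  return Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
}

function shredUserKey(userId: string): void {
  userKeys.delete(userId); // backups stay intact; this user's data is gone
}
```

The backup files themselves never change, which is what makes this approach attractive for immutable snapshot storage.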
Log File Erasure
Application logs (structured logs sent to Datadog, Splunk, CloudWatch, etc.) frequently contain personal data: email addresses in login logs, IP addresses, user IDs that can be linked to the erased record. Your log pipeline needs:
- A log retention policy that automatically expires logs beyond a defined period (90 days is common)
- Scrubbing of email addresses and names from log messages at ingestion time (use user IDs in logs, not emails)
- A process to submit erasure requests to your log management tool if it supports it (some do, many don't)
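A sketch of ingestion-time scrubbing. The regex patterns are illustrative and deliberately simple; real pipelines typically run this in a log shipper or processor stage before messages leave your infrastructure:

```typescript
// Replace email addresses and IPv4 addresses in log messages before shipping.
const EMAIL_RE = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
const IPV4_RE = /\b(?:\d{1,3}\.){3}\d{1,3}\b/g;

function scrubLogMessage(message: string): string {
  return message
    .replace(EMAIL_RE, '[email_redacted]')
    .replace(IPV4_RE, '[ip_redacted]');
}
```

Scrubbing at ingestion is far cheaper than retroactive erasure: once personal data is spread across indexed log storage, removing it is often impractical.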
Third-Party Processor Notification
Under GDPR Article 19 (and Article 17(2) where you have made the data public), when you erase data you must notify the recipients, both processors and other controllers, to whom you have disclosed it. Build a checklist of processors and integrate erasure notifications:
Common processors requiring notification:
- Email marketing platform (Mailchimp, SendGrid, HubSpot)
- CRM (Salesforce, HubSpot)
- Customer support (Zendesk, Intercom)
- Analytics (Mixpanel, Amplitude — both have GDPR deletion APIs)
- Error tracking (Sentry has a GDPR deletion endpoint)
async function notifyProcessors(userId: string, email: string): Promise<void> {
  const processors = ['mailchimp', 'mixpanel', 'amplitude', 'sentry', 'intercom'];
  const results = await Promise.allSettled([
    mailchimp.deleteMember(email),
    mixpanel.deleteUser(userId),
    amplitude.deleteUser(userId),
    sentry.deleteUser(userId),
    intercom.deleteUser(email)
  ]);
  // Record each outcome for the audit trail; retry or alert on rejections
  await Promise.all(results.map((result, i) =>
    auditLog.record({ userId, processor: processors[i], status: result.status })
  ));
}
Use Promise.allSettled (not Promise.all) so a failure in one processor does not block the others. Log each result for your audit trail.
Implementing Data Portability
The portability export must be machine-readable and structured. JSON is the standard choice. The export should be scoped to data the individual provided (profile data, uploaded content, preferences, explicitly provided behavioral data) and data generated about them through automated processing under consent.
Provide a download endpoint with a time-limited signed URL:
async function generatePortabilityExport(userId: string): Promise<string> {
const data = await collectUserData(userId);
const json = JSON.stringify(data, null, 2);
// Upload to S3 with short expiry
const key = `dsr-exports/${userId}/${Date.now()}.json`;
await s3.putObject({
    Bucket: process.env.DSR_EXPORT_BUCKET,
    Key: key,
    Body: json,
    ContentType: 'application/json',
    ServerSideEncryption: 'AES256'
  }).promise(); // AWS SDK v2: putObject returns a Request; .promise() makes it awaitable
// Generate presigned URL valid for 24 hours
return s3.getSignedUrlPromise('getObject', {
Bucket: process.env.DSR_EXPORT_BUCKET,
Key: key,
Expires: 86400
});
}
Timeline and Audit Trail
Every action taken on a DSR must be logged with timestamp, actor, and result. If you cannot complete within one month, you must notify the data subject within that first month that an extension (of up to two further months) is being taken, and explain why.
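A sketch of the `AuditEntry` shape referenced in the DSR schema earlier, plus an append helper. The field names are illustrative; the essential properties are that entries carry timestamp, actor, and result, and that the log is append-only:

```typescript
interface AuditEntry {
  timestamp: Date;
  actor: string;   // service name or staff member performing the action
  action: string;  // e.g. 'verification_passed', 'erasure_completed'
  result: 'success' | 'failure';
  detail?: string;
}

// Append-only by convention: entries are pushed, never mutated or removed.
function appendAudit(log: AuditEntry[], entry: Omit<AuditEntry, 'timestamp'>): void {
  log.push({ timestamp: new Date(), ...entry });
}
```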
Build a dashboard for your DPO or privacy team that shows:
- All open DSRs with days remaining
- Verification status
- Processing status across each data store
- Completed erasures with confirmation
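The open-DSR view above can be sketched as a simple computation over the request records (an in-memory filter stands in here for the real database query; field names follow the earlier schema):

```typescript
interface OpenDsrRow {
  requestId: string;
  type: string;
  daysRemaining: number;
  overdue: boolean;
}

function openDsrDashboard(
  requests: {
    requestId: string;
    type: string;
    status: string;
    deadlineAt: Date;
    extendedDeadlineAt?: Date;
  }[],
  now: Date = new Date()
): OpenDsrRow[] {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return requests
    .filter(r => r.status !== 'completed' && r.status !== 'rejected')
    .map(r => {
      // An approved extension supersedes the original deadline
      const effective = r.extendedDeadlineAt ?? r.deadlineAt;
      const daysRemaining = Math.ceil((effective.getTime() - now.getTime()) / DAY_MS);
      return { requestId: r.requestId, type: r.type, daysRemaining, overdue: daysRemaining < 0 };
    })
    .sort((a, b) => a.daysRemaining - b.daysRemaining); // most urgent first
}
```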
This dashboard is also what you show supervisory authorities during an audit. A well-maintained audit log is often the difference between a warning and a substantial fine.