
Abuse Prevention

Multi-layer bot detection and spam protection for public forms


Overview

Invisible, multi-layer defense against bots and spam on public forms. Legitimate users see nothing; suspicious submissions trigger escalating challenges.

What it is

5-layer defense: honeypot, behavioral signals, timing validation, fingerprinting, and Cloudflare Turnstile CAPTCHA.

Why we use it

Public forms (surveys, feedback) need spam protection without hurting legitimate user experience.

When to use

Any form accepting anonymous or public submissions. Skip for authenticated-only forms.

Key Features

  • Invisible to legitimate users - zero UX impact
  • Risk scoring (0-100) based on behavioral signals
  • Escalating challenges - CAPTCHA only when suspicious
  • IP and fingerprint tracking to catch repeat abusers
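The risk score aggregates the individual signals into a single 0-100 number. A minimal sketch of how such an aggregation could work — the flag names and weights here are illustrative, not the library's internal values:

```typescript
// Illustrative signal weighting - the real scoring logic is internal
interface SignalFlags {
  honeypotFilled: boolean;      // definitive bot signal
  noMouseOrKeyboard: boolean;   // no human interaction observed
  submittedTooFast: boolean;    // form filled faster than a human could
  knownBadFingerprint: boolean; // fingerprint seen abusing before
}

function scoreSignals(f: SignalFlags): number {
  let score = 0;
  if (f.honeypotFilled) score += 100;    // instant block
  if (f.noMouseOrKeyboard) score += 30;
  if (f.submittedTooFast) score += 25;
  if (f.knownBadFingerprint) score += 40;
  return Math.min(score, 100); // clamp to the 0-100 range
}
```

Weighting signals rather than hard-failing on any single one keeps false positives low: a touchscreen user may produce no mouse events, but that alone stays under the block threshold.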

Architecture

Defense Layers

How the 5 layers work together.

// 5-Layer Defense System (all invisible to legitimate users)

┌─────────────────────────────────────────────────────────────┐
│  CLIENT (Zero UX Impact)                                    │
├─────────────────────────────────────────────────────────────┤
│  1. Honeypot      - Hidden field, bots fill it              │
│  2. Behavioral    - Mouse, keyboard, scroll tracking        │
│  3. Timing        - Form load → submit duration             │
│  4. Fingerprint   - Hashed browser characteristics          │
│  5. Turnstile     - Cloudflare CAPTCHA (escalation only)    │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│  SERVER (Risk Scoring)                                      │
├─────────────────────────────────────────────────────────────┤
│  Score 0-100 based on signals                               │
│  < 30: ALLOW    30-50: LOG    50-70: CAPTCHA    100: BLOCK  │
└─────────────────────────────────────────────────────────────┘

Thresholds: Score < 30: Allow | 30-50: Allow + Log | 50-70: Require CAPTCHA | >= 100: Block
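The thresholds can be expressed as a simple mapping. The action names below are illustrative; the 70-100 band is not spelled out in the thresholds above, so this sketch assumes it escalates to CAPTCHA until the score reaches the block level:

```typescript
type RiskAction = 'allow' | 'allow_and_log' | 'require_captcha' | 'block';

// Maps a 0-100 risk score to the documented action tiers
function actionForScore(score: number): RiskAction {
  if (score >= 100) return 'block';
  if (score >= 50) return 'require_captcha'; // escalate to Turnstile
  if (score >= 30) return 'allow_and_log';
  return 'allow';
}
```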

Quick Start

Client-Side Integration

Add honeypot and behavioral tracking to your form.

import { useRef, type FormEvent } from 'react';
import {
  HoneypotField,
  useAbuseSignals,
  getFingerprintCached,
} from '@/lib/abuse-prevention';

function PublicForm() {
  const honeypotRef = useRef<HTMLInputElement>(null);
  const { getSignals } = useAbuseSignals();

  const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();

    // Check honeypot - bots fill hidden fields
    if (honeypotRef.current?.value) return; // Silent fail

    // Collect behavioral signals
    const signals = getSignals();
    const fingerprint = getFingerprintCached(); // Synchronous

    const data = Object.fromEntries(new FormData(e.currentTarget));

    await fetch('/api/v1/public/forms/submit', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'x-abuse-signals': JSON.stringify({
          ...signals,
          fingerprintHash: fingerprint,
        }),
      },
      body: JSON.stringify(data),
    });
  };

  return (
    <form onSubmit={handleSubmit}>
      <HoneypotField inputRef={honeypotRef} />
      {/* Form fields... */}
    </form>
  );
}
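The timing layer (3) works off the gap between form load and submit. A minimal sketch of the idea, with an illustrative threshold — the library's real cutoff may differ:

```typescript
// Humans need at least a few seconds to fill a form; bots submit instantly.
const MIN_HUMAN_FILL_MS = 2000; // illustrative threshold, not the real value

function timingLooksSuspicious(loadedAtMs: number, submittedAtMs: number): boolean {
  return submittedAtMs - loadedAtMs < MIN_HUMAN_FILL_MS;
}
```

A fast submit is a signal, not a verdict: it raises the risk score rather than blocking outright, since autofill can make legitimate submissions quick.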

Patterns

Server-Side Risk Assessment

Assess risk and block suspicious submissions.

import { assessSubmissionRisk } from '@/features/anonymous-sessions';
import type { AbuseSignals } from '@/features/anonymous-sessions';

// In API route handler
const signalsHeader = request.headers.get('x-abuse-signals');
let signals: AbuseSignals | null = null;

if (signalsHeader) {
  try {
    signals = JSON.parse(signalsHeader);
  } catch {
    // Ignore invalid signals - other layers still protect
  }
}

// Assess risk for anonymous submissions
if (signals && sessionToken) {
  const assessment = await assessSubmissionRisk(
    sessionToken,
    formId,  // The form/resource being submitted to
    signals
  );

  if (!assessment.allowed) {
    logger.warn('Submission blocked by risk assessment', {
      score: assessment.score,
      reasons: assessment.reasons,
    });
    throw new AuthorizationError('Submission blocked');
  }

  if (assessment.requireChallenge) {
    return NextResponse.json(
      { ok: false, error: { code: 'CHALLENGE_REQUIRED' } },
      { status: 428 }
    );
  }
}

Turnstile Challenge (Escalation)

Show CAPTCHA only when risk score exceeds threshold.

import { useState } from 'react';
import { Turnstile, verifyTurnstileToken } from '@/lib/abuse-prevention';

// Client: Show Turnstile when server returns 428
function FormWithChallenge() {
  const [showChallenge, setShowChallenge] = useState(false);
  const [turnstileToken, setTurnstileToken] = useState<string | null>(null);

  const handleSubmit = async (data: FormData) => {
    // On resubmit after solving the challenge, include turnstileToken in the body
    const response = await fetch('/api/submit', { ... });

    if (response.status === 428) {
      setShowChallenge(true);
      return;
    }
  };

  return (
    <form>
      {/* Form fields */}
      {showChallenge && (
        <Turnstile
          siteKey={process.env.NEXT_PUBLIC_TURNSTILE_SITE_KEY!}
          onSuccess={(token) => setTurnstileToken(token)}
          appearance="interaction-only"
        />
      )}
    </form>
  );
}

// Server: Verify token
if (body.turnstileToken) {
  const valid = await verifyTurnstileToken(
    body.turnstileToken,
    process.env.TURNSTILE_SECRET_KEY!
  );
  if (!valid) {
    throw new AuthorizationError('Verification failed');
  }
}

Note: Turnstile requires NEXT_PUBLIC_TURNSTILE_SITE_KEY and TURNSTILE_SECRET_KEY. Without these, the other 4 layers still protect your forms.
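Under the hood, server-side Turnstile verification is a POST to Cloudflare's siteverify endpoint. A sketch of what verifyTurnstileToken likely does — the helper's exact error handling and options may differ:

```typescript
// Cloudflare's documented verification endpoint
const SITEVERIFY_URL = 'https://challenges.cloudflare.com/turnstile/v0/siteverify';

async function verifyTokenSketch(token: string, secret: string): Promise<boolean> {
  const res = await fetch(SITEVERIFY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ secret, response: token }),
  });
  const data = (await res.json()) as { success: boolean };
  return data.success;
}
```

Tokens are single-use and short-lived, so verify on the server for every submission rather than caching the result.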

Watch Out

Missing abuse prevention on public forms

Don't

// No abuse prevention - vulnerable to spam
export async function POST(request: Request) {
  const body = await request.json();

  // Directly process submission - no validation!
  await createSubmission(body);

  return NextResponse.json({ ok: true });
}

Do

// Multi-layer abuse prevention
export async function POST(request: Request) {
  const body = await request.json();

  const signalsHeader = request.headers.get('x-abuse-signals');
  let signals = null;
  try {
    signals = signalsHeader ? JSON.parse(signalsHeader) : null;
  } catch {
    // Ignore invalid signals - other layers still protect
  }

  if (signals && sessionToken) {
    const assessment = await assessSubmissionRisk(
      sessionToken, resourceId, signals
    );

    if (!assessment.allowed) {
      throw new AuthorizationError('Blocked');
    }

    if (assessment.requireChallenge && !body.turnstileToken) {
      return NextResponse.json({ error: 'CHALLENGE_REQUIRED' }, { status: 428 });
    }
  }

  await createSubmission(body);
  return NextResponse.json({ ok: true });
}

Skipping the honeypot (it catches naive bots for free)

Don't

// Missing honeypot - naive bots get through
function PublicForm() {
  return (
    <form onSubmit={handleSubmit}>
      <input name="email" />
      <input name="message" />
      <button type="submit">Submit</button>
    </form>
  );
}

Do

// Honeypot catches naive bots (zero UX cost)
function PublicForm() {
  const honeypotRef = useRef<HTMLInputElement>(null);

  const handleSubmit = async () => {
    if (honeypotRef.current?.value) return; // Silent fail
    // Process form...
  };

  return (
    <form onSubmit={handleSubmit}>
      <HoneypotField inputRef={honeypotRef} />
      <input name="email" />
      <input name="message" />
      <button type="submit">Submit</button>
    </form>
  );
}
  • Sending signals in request body instead of x-abuse-signals header
  • Showing Turnstile to everyone instead of only when requireChallenge is true
  • Forgetting to check honeypot value before processing form
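For reference, a honeypot field like HoneypotField typically renders a visually hidden input kept out of the tab order and autofill. A sketch of the props such a field might use — the component's actual markup may differ:

```typescript
// Visually hidden but present in the DOM, so bots "see" and fill it
const honeypotProps = {
  type: 'text',
  name: 'website',      // an enticing name for bots; illustrative choice
  tabIndex: -1,         // keep real users from tabbing into it
  autoComplete: 'off',  // stop browsers from autofilling it
  'aria-hidden': true,  // hide it from assistive technology
  style: {
    position: 'absolute',
    left: '-9999px',
    height: '1px',
    width: '1px',
    overflow: 'hidden',
  },
} as const;
```

Avoid `display: none` or `type="hidden"` for honeypots: sophisticated bots skip fields that are obviously hidden, while off-screen positioning looks like a real input to naive ones.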

Related

Rate Limiting

Request throttling

Authentication

Session management

Security Headers

CORS and CSP