Online Harms Submission.
Submission to the Education & Workforce Select Committee
Inquiry: Harm young New Zealanders encounter online, and the roles that Government, business, and society should play
Submitter: PILLAR (Protecting Individual Life, Liberty, and Rights)
Date: 06/10/2025
Executive Summary
New Zealanders want children to be safer online. PILLAR shares that aim — but how we pursue it matters. In a liberal democracy, freedom of expression and privacy are foundations, not luxuries. Limits on those rights must be demonstrably justified and minimally impairing, and any approach must respect that parents hold the primary role in guiding children, consistent with their evolving capacities.
The Committee faces a practical challenge at the outset: a narrow, durable definition of “social media” is elusive, because the same interactive features appear across messaging apps, gaming platforms, video sites, and even news-comment sections. Banning one venue simply pushes activity to substitutes. On top of that sits an unavoidable trilemma: any age-restriction scheme tends to suffer from at least one of (i) easy workarounds by savvy teens, (ii) heavy friction for adults, or (iii) loss of privacy and pseudonymity through ID capture and retention.
For those reasons, the least-intrusive lever is to work where the ecosystem already has control points (the device and app-store layer), with parental approval and sensible defaults. That won’t eliminate all exposure (nothing will), but it centres families without building universal ID checkpoints.
Accordingly, we recommend five practical options, the first three of which are developed into detailed recommendations below:
Parents-in-the-Loop by default (device/app-store/ISP);
A Digital Resilience Curriculum plus parent micro-credentials;
A single front door for redress with StopNCII integration;
Scoped, privacy-preserving age assurance for adult-only sites (not general social media); and
An illegal-harms enforcement sprint paired with lean platform transparency.
The implementation ethos is simple: regulate processes and accountability — not lawful speech — and measure outcomes that matter (time to redress, uptake of controls, help-seeking) rather than crude takedown totals.
First principles
Freedom as the baseline. Section 14 of the New Zealand Bill of Rights Act (NZBORA) protects the right to seek, receive, and impart information and opinions of any kind. Section 5 requires that any limit be demonstrably justified in a free and democratic society — necessary, proportionate, and the least-intrusive effective option. Internationally, ICCPR Article 19 and the UN Human Rights Committee’s General Comment No. 34 describe free expression as a foundation of democratic society. For children, CRC Article 5 affirms the rights and duties of parents to provide direction and guidance consistent with children’s evolving capacities.
Scope where the law is strongest. The state has an unquestioned role in combating illegal content and conduct — child sexual-exploitation material, grooming, credible threats, and serious harassment. By contrast, attempts to police broad categories of “legal-but-harmful” speech inevitably invite subjectivity and over-removal, especially when platforms are incentivised to “play it safe” by deleting borderline but lawful content.
Privacy, pseudonymity, and due process. General identity mandates and persistent age records create data-security and discrimination risks, and they chill lawful participation under pseudonyms — a long-standing protection for vulnerable speakers. Any regime must feature fair process, transparency, and effective appeal rights.
Proportionality in practice. To help the Committee stress-test proposals, we propose a practical checklist:
Define the specific harm and cohort precisely.
Demonstrate a rational connection between the measure and the harm (not a proxy).
Choose the least-intrusive effective tool, favouring parent-centred and privacy-preserving approaches.
Weigh privacy, equity, and evasion risks.
Pre-commit to metrics, public reporting, and sunset/review dates.
Taken together, these principles do not minimise the reality of online risks. Rather, they discipline the response — keeping us focused on illegal harms, parental agency, and capability-building instead of broad prior restraint on lawful speech.
The definition problem
Well-drafted law starts with clear definitions. Here, that’s harder than it sounds. If Parliament writes rules for a narrow set of brand-name “social media” platforms, features migrate; if it defines by function (feeds, messaging, user-generated content), many benign services are swept in. Gaming platforms with chat, video platforms with logged-out feeds, and news sites with open comments all present edge cases. Meanwhile, logged-out viewing, age mis-statement, shared credentials, and VPNs provide ready substitution paths no matter how tightly the scope is drawn.
On top of the definition challenge sits the age-restriction trilemma. Policy can push down on one corner, but another pops up:
• Evasion: minors can misstate birthdates, borrow accounts, browse logged-out, or use VPNs.
• Adult friction: repeated ID checks, selfie/video verification, extra latency, and lockouts.
• Privacy loss: centralised identity records, cross-service linkage, and profiling risks.
Recognising those constraints isn’t fatalism; it’s good engineering. It argues for parent-centred tools and narrowly targeted measures, rather than sweeping duties over lawful content that are easy to evade and hard to justify.
International comparators
United Kingdom. The Online Safety Act imposes expansive “systems and processes” duties, including children’s safety codes and mandatory age-assurance for commercial pornography. Implementation is proving complex: compliance friction, risks of over-blocking lawful content, and predictable circumvention via VPNs and logged-out consumption. Consultations on potential technology-notice powers have also raised concerns about mandated content scanning and its implications for end-to-end encryption.
Canada. The federal omnibus Online Harms Act failed to pass; a separate Senate track continues to pursue pornography age-gating (with potential ISP blocking) that raises persistent privacy and scope concerns. The broader political lesson is that sweeping frameworks stall, while narrower, problem-specific tools stand a better chance.
Australia. A national under-16 social-media minimum age (from 10 December 2025) relies on platforms taking “reasonable steps” with “minimally invasive” age-assurance. The Government’s technology trials confirm feasibility alongside accuracy, bias, and usability trade-offs; critically, logged-out browsing remains a material loophole.
These experiences offer a common lesson: design to the real risk surface. Otherwise, we trade away rights, incur heavy costs, and still see behaviour displaced rather than reduced.
Recommendations
Our approach: target illegal harms, provide practical supports, centre parents, and prefer least-intrusive, privacy-preserving measures over broad speech controls. Each recommendation is implementable within 12 months, measurable, and reversible.
Recommendation 1 — Parents-in-the-Loop by Default
Purpose: Make parental supervision the default on devices, app stores, and home networks — without building central ID systems.
Core deliverables (12 months):
• Non-statutory NZ Family Setup Code (family-linking on by default; parental approval for app installs by minors; safe search on by default).
• ISP/telco sign-up prompt for one-click family filters.
• Parent Digital Toolkit (one-page guides + 2–3 min videos) and community setup nights.
Governance: MBIE lead; partners: MoE, DIA, Netsafe, Privacy Commissioner, platforms/ISPs.
KPIs (quarterly): % of children’s devices with family-linking enabled; % of app installs by minors routed through parental approval; parent-helpline answer time.
Safeguards & risks: No central ID; data-minimisation and deletion requirements; address the digital divide through in-person clinics; monitor evasion and pair controls with education.
Recommendation 2 — Digital Resilience Curriculum + Parent Micro-credentials
Purpose: Give children and caregivers practical skills to prevent and respond to online harms.
Core deliverables (12 months):
• Age-staged curriculum modules (privacy, scams, consent, attention/sleep).
• Short, practical teacher professional learning and development (PLD) and 2–3 hour parent micro-credentials.
• Home–school bridge: termly digital check-ins and optional Family Tech Pact.
Governance: MoE lead; partners: Netsafe, evaluators, iwi/Pasifika providers.
KPIs (semi-annual): % schools delivering modules; teacher/parent confidence uplift (pre/post); change in help-seeking rates.
Safeguards & risks: Content-neutral design; inclusive materials; integration with existing learning areas to avoid curriculum overload.
Recommendation 3 — One Front Door for Redress + StopNCII Integration
Purpose: Provide fast, trauma-informed resolution when serious harms occur.
Core deliverables (6–12 months):
• Single hotline/portal routing to Netsafe triage and Police escalation.
• StopNCII hash workflow so survivors can block re-uploads without having to share the images themselves.
• Published Service Charter with target response times and quarterly dashboard.
Governance: Netsafe as operational lead; Police for enforcement; DIA coordinating standards; advisory input from the Privacy Commissioner and victim advocates.
KPIs (quarterly): Time-to-first-contact for urgent cases; median time-to-takedown; victim satisfaction.
Safeguards & risks: Consent-driven sharing; strict access controls and audits; surge staffing plan and public escalation for platform non-compliance.
Why detail only three? These three measures cover the critical path: (1) set parental-control defaults where they matter most, (2) build capability in classrooms and homes, and (3) deliver rapid redress when harm occurs. They avoid universal ID mandates and broad speech controls, yet give the Committee concrete levers with visible results inside a year.
Conclusion
New Zealand can protect children effectively without building a speech-policing state. A parent-centred, capability-building, targeted-enforcement strategy — grounded in NZBORA proportionality and robust privacy protections — will do more good, with far fewer unintended costs, than universal identity checks or broad prior restraint.
The three detailed recommendations above are practical, measurable, and implementable within 6–12 months: they provide clear deliverables, KPIs, and governance pathways so the Committee can track progress and hold actors to account. Together with the scoped age-assurance and enforcement options set out in the Executive Summary, they put families first, tackle the genuinely illegal harms that warrant state action, demand useful transparency from platforms, and preserve the constitutional foundations on which our democracy depends.
Crucially, this is a reversible, evidence-led approach: each measure can be independently evaluated, scaled up, or withdrawn at pre-committed review points if results or rights impacts prove unacceptable. In short, we can reduce real risk to young people quickly while keeping intact the freedoms and privacy that make a free society resilient.