afterbuild/ops
ERR-985/Supabase · Storage
new row violates row-level security policy for table "objects"

Appears when a private bucket has no INSERT policy on storage.objects, or the policy predicate does not match the upload path.

Supabase Storage upload 403

Storage is just a table. Every upload writes a row to storage.objects, which is RLS-protected. No INSERT policy = no uploads.

Last updated 17 April 2026 · 7 min read · By Hyder Shah
Direct answer
Storage uploads write rows to the storage.objects table. Supabase enables RLS on that table by default. Without a permissive INSERT policy for the authenticated role, every upload returns 403 with new row violates row-level security policy. Add a policy FOR INSERT TO authenticated WITH CHECK (bucket_id = 'x' AND ...).

Quick fix for Supabase Storage upload 403

supabase/migrations/storage_policies.sql
```sql
-- Allow authenticated users to upload into their own folder
create policy "users upload own folder"
on storage.objects
for insert
to authenticated
with check (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);

-- Allow them to read their own files back
create policy "users read own folder"
on storage.objects
for select
to authenticated
using (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);

-- Client upload path must match: `${userId}/filename.png`
```
Per-user folder pattern — upload to `${userId}/file.png`, policy scopes to first path segment
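To keep the client in sync with that predicate, mirror the path shape in one helper instead of building paths ad hoc. A minimal sketch — `buildUploadPath` and `matchesFolderPolicy` are hypothetical helpers, not supabase-js APIs:

```typescript
// Mirrors the SQL predicate (storage.foldername(name))[1] = auth.uid()::text
// so the client can only produce object names the policy will accept.
// Hypothetical helpers — not part of supabase-js.

function buildUploadPath(userId: string, filename: string): string {
  // The first path segment must be the caller's UUID for the policy to pass.
  return `${userId}/${filename}`;
}

function matchesFolderPolicy(objectName: string, userId: string): boolean {
  const segments = objectName.split("/");
  // A flat name like 'file.png' has no folder segment, so the predicate is false.
  if (segments.length < 2) return false;
  return segments[0] === userId;
}
```

Upload with `supabase.storage.from('user-uploads').upload(buildUploadPath(user.id, file.name), file)` so the upload path and the policy predicate cannot drift apart.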

Deeper fixes when the quick fix fails

01 · Server-side signed upload URLs (safest for user-generated content)

app/api/upload-url/route.ts
```typescript
// app/api/upload-url/route.ts — server creates a signed URL
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

const admin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // server-only
);

export async function POST(req: Request) {
  const { userId, filename } = await req.json();
  const path = `${userId}/${Date.now()}-${filename}`;

  const { data, error } = await admin.storage
    .from("user-uploads")
    .createSignedUploadUrl(path);

  if (error) return NextResponse.json({ error: error.message }, { status: 500 });
  return NextResponse.json(data); // { signedUrl, token, path }
}
```
Server uses service_role to generate single-use upload URL; client never sees the key
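One hardening step worth adding to that route: it trusts `userId` and `filename` from the request body, so a crafted filename like `../../other/file.png` could steer the signed path outside the caller's folder. A minimal sketch, assuming you sanitize before building the path — `sanitizeFilename` is a hypothetical helper, not a Supabase API:

```typescript
// Hypothetical helper: keep only the last path segment and strip traversal
// sequences so the signed path stays inside the `${userId}/` folder.
function sanitizeFilename(filename: string): string {
  const base = filename.split("/").pop()!.split("\\").pop()!;
  return base.replace(/\.\.+/g, ".").replace(/[\x00-\x1f]/g, "");
}
```

In the route, build the path as `${userId}/${Date.now()}-${sanitizeFilename(filename)}`; better still, derive `userId` from the server-side session rather than the request body.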

02 · Allow public read while keeping uploads authed

supabase/migrations/public_read_authed_write.sql
```sql
-- Public read on a specific bucket without exposing uploads
create policy "public read uploads"
on storage.objects
for select
to public
using (bucket_id = 'public-assets');

-- Still require auth for writes
create policy "authed insert public assets"
on storage.objects
for insert
to authenticated
with check (bucket_id = 'public-assets');
```
Use when you want anyone to view files but only logged-in users to upload
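With the bucket public, files become readable at a fixed URL shape under your project URL. supabase-js exposes this as `storage.from(bucket).getPublicUrl(path)`; a hand-rolled builder just makes the shape explicit — a minimal sketch:

```typescript
// Public objects are served at /storage/v1/object/public/<bucket>/<path>.
// supabase-js provides getPublicUrl(path) for the same thing; this version
// only exists to make the URL shape visible.
function publicObjectUrl(projectUrl: string, bucket: string, path: string): string {
  const base = projectUrl.replace(/\/+$/, ""); // tolerate a trailing slash
  return `${base}/storage/v1/object/public/${bucket}/${path}`;
}
```

Remember this URL only serves files in buckets marked public; it never grants upload access.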

03 · pgTAP test: cross-user upload must fail

tests/storage_rls.sql
```sql
begin;
select plan(1);

set local role authenticated;
set local "request.jwt.claims" = '{"sub":"user-a-uuid"}';

select throws_ok(
  $$insert into storage.objects (bucket_id, name, owner)
    values ('user-uploads', 'user-b-uuid/file.png', 'user-a-uuid')$$,
  '42501',
  'user-a cannot upload into user-b folder'
);

select * from finish();
rollback;
```
CI test — catches the policy regression the day someone removes the path check

Why AI-built apps hit Supabase Storage upload 403

Supabase Storage is not a separate service. It is a set of HTTP endpoints that write to regular Postgres tables: storage.buckets for bucket config and storage.objects for every file. Both tables have Row Level Security enabled by default. Every upload is really an INSERT into storage.objects — and if no INSERT policy allows the authenticated role, the insert is denied with the same 42501 error as any other RLS violation. The HTTP layer surfaces this as a 403.

AI builders trip on this because the Supabase Storage UI shows a bucket visibility toggle (public/private) and the model assumes that toggle controls uploads. It does not. The toggle only affects anonymous read access via storage/v1/object/public/... URLs. Upload permissions, authed read permissions, and delete permissions all live in RLS policies on storage.objects. A Lovable scaffold will mark a bucket public to fix a display issue and never touch the policies — then wonder why uploads 403.

The second common failure is predicate mismatch. Supabase docs suggest policies like (storage.foldername(name))[1] = auth.uid()::text, which requires the upload path to be {userId}/file.png. If the client uploads to a flat namespace like avatars/file.png, the first folder segment is avatars, which never equals the user's UUID, so the predicate is false and the upload 403s. The policy is written, the bucket is configured — but the shapes do not match.
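The mismatch is easy to see outside Postgres. Here is the folder comparison re-implemented in TypeScript for illustration (`foldername` below mimics `storage.foldername`, which returns the folder segments of an object name):

```typescript
// Mimics storage.foldername(name): every path segment except the filename.
// SQL arrays are 1-indexed, so (storage.foldername(name))[1] is folders[0] here.
function foldername(name: string): string[] {
  return name.split("/").slice(0, -1);
}

function predicatePasses(name: string, uid: string): boolean {
  return foldername(name)[0] === uid;
}

const uid = "11111111-1111-1111-1111-111111111111"; // illustrative auth.uid()

// Per-user path: first folder equals the UUID, so the upload is allowed.
const ok = predicatePasses(`${uid}/avatar.png`, uid); // true

// Flat namespace: first folder is 'avatars', never a UUID, so the upload 403s.
const denied = predicatePasses("avatars/avatar.png", uid); // false
```

The policy is not wrong and the bucket is not misconfigured; the two shapes simply have to match.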

The third cause is auth state. auth.uid() returns null whenever the Postgres request is made with the anon key and no user JWT. If your client was initialized with only the anon key and never called auth.setSession(), the upload runs as anon, not as the logged-in user, and the policy scoped to authenticated does not apply.
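A quick way to confirm which identity a request actually carries is to decode the JWT the client sends: the anon key's payload has role "anon" and no sub, while a session access token has role "authenticated" and the user's UUID in sub. A minimal debug sketch — `decodeJwtPayload` is a hypothetical helper doing plain base64url decoding, with no signature verification:

```typescript
// Debug-only: decode a JWT payload without verifying its signature.
// If role is 'anon' and sub is absent, auth.uid() will be null server-side
// and policies scoped to the authenticated role will not apply.
function decodeJwtPayload(jwt: string): Record<string, unknown> {
  const payload = jwt.split(".")[1];
  const b64 = payload.replace(/-/g, "+").replace(/_/g, "/"); // base64url → base64
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}
```

Call it on `session.access_token` (or whatever Authorization header the upload request sends) and confirm `sub` is present before blaming the policy.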

Supabase Storage upload 403 by AI builder

How often each AI builder ships this error and the pattern that produces it.

AI builder × Supabase Storage upload 403
| Builder | Frequency | Pattern |
| --- | --- | --- |
| Lovable | Every Storage scaffold | Marks bucket public, never writes storage.objects policies |
| Bolt.new | Common | Flat upload paths that do not match per-user folder policies |
| Cursor | Common | Uses anon key without setSession; auth.uid() returns null |
| Base44 | Sometimes | Uploads from server with service_role; works in dev, ignores RLS entirely |
| Replit Agent | Rare | Forgets SELECT policy; uploads succeed but downloads 403 |


Still stuck with Supabase Storage upload 403?

Emergency triage · $299 · 48h turnaround
We restore service and write the root-cause report.
Start the triage →

Supabase Storage upload 403 questions

Why does Supabase Storage return 403 even when I am logged in?
Supabase Storage is backed by the storage.objects table, which has Row Level Security. A logged-in user is still subject to RLS policies on that table. If the bucket is private and no INSERT policy exists on storage.objects for the authenticated role, the upload is denied with 403 and the message 'new row violates row-level security policy'. Fix by adding an INSERT policy scoped to authenticated.
What is the difference between bucket policies and storage.objects policies?
Bucket-level visibility (public vs private) only controls anonymous read access via the public URL. Everything else — uploads, downloads for authed users, deletes — is governed by RLS policies on the storage.objects table. Making a bucket public does not allow anon uploads, and making a bucket private does not block authed uploads if storage.objects has permissive policies. Write policies on storage.objects explicitly.
My policy uses auth.uid() but uploads still 403 — why?
Two common causes. First, auth.uid() returns null when the client request is not signed with a valid user JWT. Check that the supabase-js client was initialized with the session token, not just the anon key. Second, the policy predicate references a path segment that your upload code does not actually use — for example, the policy expects (storage.foldername(name))[1] = auth.uid()::text but your upload path is just user-file.png with no folder. Match the policy shape to the upload path shape.
Can I upload with the service_role key to bypass RLS?
Yes, but only from the server. The service_role key bypasses all RLS. Never expose it to the browser. A common pattern is: client requests a signed upload URL from your API, the API uses service_role to generate the URL, the client uploads directly to that URL. This keeps credentials server-side while allowing direct browser-to-storage transfers. Still write RLS policies as defense in depth.
How much does a full Supabase Storage audit cost?
Our Security Audit covers Storage RLS alongside table RLS. For a project with under 10 buckets, the audit including policy writes, test fixture uploads per role, and a pgTAP suite takes about 2 hours. Fixed-fee at $499. Multi-tenant setups with organization-based access add 1 hour. The deliverable includes policies, a signed-URL helper, and a CI test that asserts cross-tenant uploads fail.
Next step

Ship the fix. Keep the fix.

Emergency Triage restores service in 48 hours. Break the Fix Loop rebuilds CI so this error cannot ship again.

About the author

Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, Replit, v0, and Base44. Read more in our rescue methodology.
