new row violates row-level security policy for table "objects"
Appears when a private bucket has no INSERT policy on storage.objects, or when the policy predicate does not match the upload path.
Supabase Storage upload 403
Storage is just a table. Every upload writes a row to storage.objects, which is RLS-protected. No INSERT policy = no uploads.
Every upload is an INSERT into the storage.objects table, which has RLS enabled by default in Supabase. Without a permissive INSERT policy for the authenticated role, every upload returns 403 with new row violates row-level security policy. Add a policy FOR INSERT TO authenticated WITH CHECK (bucket_id = 'x' AND ...).

Quick fix for Supabase Storage upload 403
```sql
-- Allow authenticated users to upload into their own folder
create policy "users upload own folder"
on storage.objects
for insert
to authenticated
with check (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);

-- Allow them to read their own files back
create policy "users read own folder"
on storage.objects
for select
to authenticated
using (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);

-- Client upload path must match: `${userId}/filename.png`
```

Deeper fixes when the quick fix fails
01 · Server-side signed upload URLs (safest for user-generated content)
```typescript
// app/api/upload-url/route.ts — server creates a signed URL
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

const admin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // server-only
);

export async function POST(req: Request) {
  const { userId, filename } = await req.json();
  const path = `${userId}/${Date.now()}-${filename}`;

  const { data, error } = await admin.storage
    .from("user-uploads")
    .createSignedUploadUrl(path);

  if (error) return NextResponse.json({ error: error.message }, { status: 500 });
  return NextResponse.json(data); // { signedUrl, token, path }
}
```

02 · Allow public read while keeping uploads authed
```sql
-- Public read on a specific bucket without exposing uploads
create policy "public read uploads"
on storage.objects
for select
to public
using (bucket_id = 'public-assets');

-- Still require auth for writes
create policy "authed insert public assets"
on storage.objects
for insert
to authenticated
with check (bucket_id = 'public-assets');
```

03 · pgTAP test: cross-user upload must fail
```sql
begin;
select plan(1);

set local role authenticated;
set local "request.jwt.claims" = '{"sub":"user-a-uuid"}';

select throws_ok(
  $$insert into storage.objects (bucket_id, name, owner)
    values ('user-uploads', 'user-b-uuid/file.png', 'user-a-uuid')$$,
  '42501',
  'user-a cannot upload into user-b folder'
);

select * from finish();
rollback;
```

Why AI-built apps hit Supabase Storage upload 403
Supabase Storage is not a separate service. It is a set of HTTP endpoints that write to regular Postgres tables: storage.buckets for bucket config and storage.objects for every file. Both tables have Row Level Security enabled by default. Every upload is really an INSERT into storage.objects — and if no INSERT policy allows the authenticated role, the insert is denied with the same 42501 error as any other RLS violation. The HTTP layer surfaces this as a 403.
AI builders trip on this because the Supabase Storage UI shows a bucket visibility toggle (public/private) and the model assumes that toggle controls uploads. It does not. The toggle only affects anonymous read access via storage/v1/object/public/... URLs. Upload permissions, authed read permissions, and delete permissions all live in RLS policies on storage.objects. A Lovable scaffold will mark a bucket public to fix a display issue and never touch the policies — then wonder why uploads 403.
The second common failure is predicate mismatch. Supabase docs suggest policies like (storage.foldername(name))[1] = auth.uid()::text, which requires the upload path to be {userId}/file.png. If the client uploads to a flat namespace like avatars/file.png, the first folder segment is avatars, which never equals the user's UUID, so the predicate is false and the upload 403s. The policy is written, the bucket is configured — but the shapes do not match.
The third cause is auth state. auth.uid() returns null whenever the Postgres request is made with the anon key and no user JWT. If your client was initialized with only the anon key and never called auth.setSession(), the upload runs as anon, not as the logged-in user, and the policy scoped to authenticated does not apply.
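The effect is easy to model: `auth.uid()` just reads the `sub` claim from the request JWT, so a request carrying only the anon key has no `sub` and no `authenticated` role. A rough sketch of that resolution logic (the `Claims` type and both helpers are illustrative stand-ins, not Supabase internals):

```typescript
// Illustrative model of how the caller is resolved: the anon key alone
// carries role "anon" and no user sub claim.
type Claims = { role: "anon" | "authenticated"; sub?: string };

// Stand-in for Postgres auth.uid(): the sub claim, or null without a session.
function authUid(claims: Claims): string | null {
  return claims.sub ?? null;
}

// A policy "to authenticated" only applies when the role matches,
// and its predicate compares the first folder against auth.uid().
function insertAllowed(claims: Claims, firstFolder: string): boolean {
  return claims.role === "authenticated" && authUid(claims) === firstFolder;
}

const anonRequest: Claims = { role: "anon" }; // client never called auth.setSession()
const userRequest: Claims = { role: "authenticated", sub: "user-a-uuid" };

console.log(insertAllowed(anonRequest, "user-a-uuid")); // false: runs as anon
console.log(insertAllowed(userRequest, "user-a-uuid")); // true
```

The practical check is to log the current session before uploading; if it is null, the policy scoped to authenticated can never pass no matter how the predicate is written.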
Supabase Storage upload 403 by AI builder
How often each AI builder ships this error and the pattern that produces it.
| Builder | Frequency | Pattern |
|---|---|---|
| Lovable | Every Storage scaffold | Marks bucket public, never writes storage.objects policies |
| Bolt.new | Common | Flat upload paths that do not match per-user folder policies |
| Cursor | Common | Uses anon key without setSession; auth.uid() returns null |
| Base44 | Sometimes | Uploads from server with service_role — works in dev, ignores RLS entirely |
| Replit Agent | Rare | Forgets SELECT policy; uploads succeed but downloads 403 |
Stop Supabase Storage upload 403 recurring in AI-built apps
- Use a per-user folder path convention so policies can key on (storage.foldername(name))[1].
- Always pair an INSERT policy with a matching SELECT policy on storage.objects.
- For user-generated content, generate signed upload URLs server-side with service_role.
- Test uploads as anon, authenticated user A, and authenticated user B — cross-user must fail.
- Never mark a bucket public to fix an authed upload issue — that only changes anon read rules.
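The three-actor test in the checklist reduces to a small truth table. A pure model of the WITH CHECK predicate, assuming the quick-fix policy on the hypothetical `user-uploads` bucket (not a real Supabase call; real coverage belongs in pgTAP):

```typescript
// Model of the per-user WITH CHECK predicate for three actors.
// This is the truth table the pgTAP test should enforce in the database.
type Actor = { role: "anon" | "authenticated"; uid: string | null };

function canInsert(actor: Actor, bucketId: string, objectName: string): boolean {
  return (
    actor.role === "authenticated" &&
    bucketId === "user-uploads" &&
    objectName.includes("/") && // flat paths have no user folder
    objectName.split("/")[0] === actor.uid
  );
}

const anon: Actor = { role: "anon", uid: null };
const userA: Actor = { role: "authenticated", uid: "user-a-uuid" };
const userB: Actor = { role: "authenticated", uid: "user-b-uuid" };

console.log(canInsert(anon,  "user-uploads", "user-a-uuid/file.png")); // false
console.log(canInsert(userA, "user-uploads", "user-a-uuid/file.png")); // true
console.log(canInsert(userB, "user-uploads", "user-a-uuid/file.png")); // false: cross-user must fail
```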
Supabase Storage upload 403 questions
- Why does Supabase Storage return 403 even when I am logged in?
- What is the difference between bucket policies and storage.objects policies?
- My policy uses auth.uid() but uploads still 403 — why?
- Can I upload with the service_role key to bypass RLS?
- How much does a full Supabase Storage audit cost?
Ship the fix. Keep the fix.
Emergency Triage restores service in 48 hours. Break the Fix Loop rebuilds CI so this error cannot ship again.
Hyder Shah leads Afterbuild Labs, shipping production rescues for apps built in Lovable, Bolt.new, Cursor, Replit, v0, and Base44. Read our rescue methodology.