agenciapsilmno/database-novo/schema/03_functions/storage.sql
Leonardo 2644e60bb6 CRM WhatsApp Group 3 complete + Milestone A/B (Asaas) + SaaS admin + polymorphic refactors
Session 11+: wrapping up the WhatsApp CRM with two providers (Evolution/Twilio),
the credits system with Asaas/PIX, phone/email polymorphism, and SaaS admin
integration into the existing /saas/addons.

═══════════════════════════════════════════════════════════════════════════
GROUP 3 — WORKFLOW / CRM (complete)
═══════════════════════════════════════════════════════════════════════════

3.1 Tags · conversation_tags migration + seed of 5 system tags · composable
useConversationTags.js · popover + pills in the drawer and on the Kanban cards.

3.2 Assign conversation to a therapist · migration 20260421000012 with PK
(tenant_id, thread_key), UPSERT, RLS validating the assignee as an active
member of the same tenant · conversation_threads view extended with
assigned_to + assigned_at · composable useConversationAssignment.js · drawer
with filterable Select + "Take over" button · inbox with aside filter
(All/Mine/Unassigned) and an assignee chip on each card ("Me" highlighted
in blue).

3.3 Internal notes · conversation_notes migration · composable + collapsible
section in the drawer · only the creator can edit/delete (RLS).

3.5 Convert unknown sender into a patient · button + quick-register dialog ·
"Link existing" with a filterable Select of up to 500 patients · creates the
linked WhatsApp phone via upsertWhatsappForExisting.

3.6 Conversation history in the patient record · new "Conversations" tab in
PatientProntuario.vue · PatientConversationsTab.vue with stats (total /
received / sent / first / last), filter SelectButton, timeline with bubbles
per direction, inline media (image/audio/video/doc via signed URL), ✓ ✓✓
delivery indicators, "Open in CRM" button.

═══════════════════════════════════════════════════════════════════════════
MILESTONE A — WHATSAPP UNIFICATION (two mutually exclusive providers)
═══════════════════════════════════════════════════════════════════════════

- Chooser page ConfiguracoesWhatsappChooserPage.vue with 2 cards (Personal/
  Official), deactivation via the deactivate-notification-channel edge function
- send-whatsapp-message refactored with per-provider routing; Twilio deducts
  1 credit before sending and refunds it on failure
- Twilio parity (new): shared module supabase/functions/_shared/whatsapp-hooks.ts
  with provider-agnostic logic (opt-in, opt-out, auto-reply, schedule helpers
  in the São Paulo TZ, makeTwilioCreditedSendFn, which wraps the send in an
  atomic deduction + rollback). Consumed by both Evolution AND Twilio inbound.
  Evolution refactored (~290 duplicated lines removed).
- Private whatsapp-media bucket · decrypt via Evolution's
  getBase64FromMediaMessage · upload under tenant/yyyy/mm paths · signed URLs
  on demand
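The shape of the credited-send wrapper can be sketched as below. This is a minimal illustration, not the actual whatsapp-hooks.ts API; the deduct/refund signatures and the error code are assumptions:

```typescript
// Hypothetical sketch of makeTwilioCreditedSendFn's pattern: deduct 1
// credit, attempt the send, refund on any failure. Names/signatures
// are illustrative, not the real module's API.
type SendResult = { ok: boolean; error?: string };

interface CreditOps {
  deduct: (tenantId: string, amount: number) => Promise<boolean>; // false = no balance
  refund: (tenantId: string, amount: number) => Promise<void>;
}

function makeCreditedSendFn(
  ops: CreditOps,
  send: (to: string, body: string) => Promise<SendResult>,
) {
  return async (tenantId: string, to: string, body: string): Promise<SendResult> => {
    // Deduct BEFORE sending so a concurrent burst cannot overdraw the balance.
    const deducted = await ops.deduct(tenantId, 1);
    if (!deducted) return { ok: false, error: "insufficient_credits" };
    try {
      const res = await send(to, body);
      if (!res.ok) await ops.refund(tenantId, 1); // provider rejected: give it back
      return res;
    } catch (e) {
      await ops.refund(tenantId, 1); // transport/unexpected failure: refund too
      throw e;
    }
  };
}
```

Deducting before the send, rather than after, is what makes the pattern safe under concurrency; the refund path then covers provider rejections and transport failures alike.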

═══════════════════════════════════════════════════════════════════════════
MILESTONE B — WHATSAPP CREDITS SYSTEM + ASAAS
═══════════════════════════════════════════════════════════════════════════

Database:
- Migration 20260421000007_whatsapp_credits (4 tables: balance,
  transactions, packages, purchases) + RPCs add_whatsapp_credits and
  deduct_whatsapp_credits (atomic, with SELECT FOR UPDATE)
- Migration 20260421000013_tenant_cpf_cnpj (column on tenants with a CHECK
  for 11 or 14 digits)
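As a rough TypeScript counterpart of that CHECK (digit count only — the real isValidCPF/CNPJ in utils/validators also verify check digits):

```typescript
// Mirrors the database CHECK on tenants.cpf_cnpj: after stripping
// punctuation, the value must have exactly 11 digits (CPF) or 14
// digits (CNPJ). Length-only sketch; no check-digit validation here.
function isValidCpfCnpjShape(raw: string): boolean {
  const digits = raw.replace(/\D/g, "");
  return digits.length === 11 || digits.length === 14;
}
```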

Edge functions:
- create-whatsapp-credit-charge · Asaas v3 (sandbox + prod) · PIX with
  QR code · getOrCreateAsaasCustomer patches the existing customer with the
  CPF when it is missing
- asaas-webhook · receives PAYMENT_RECEIVED/CONFIRMED and credits the balance

Frontend (tenant):
- /configuracoes/creditos-whatsapp page with balance + store + history
- Confirmation dialog with CPF/CNPJ (validated via isValidCPF/CNPJ from
  utils/validators, formatted on blur, pre-filled from tenants.cpf_cnpj,
  persisted on first use) · sandbox fallback 24971563792 REMOVED
- Composable useWhatsappCredits extracts friendly error messages via
  error.context.json()
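A sketch of that extraction, assuming the edge functions return a JSON body with a code field — the codes and messages below are illustrative, not the real mapping:

```typescript
// Sketch: pull the edge function's JSON error body off a
// FunctionsHttpError-like object (supabase-js exposes the HTTP
// response as error.context) and map its code to a pt-BR message.
// The codes/messages are illustrative examples.
const MESSAGES: Record<string, string> = {
  insufficient_credits: "Créditos insuficientes para enviar a mensagem.",
  provider_error: "Falha no provedor de WhatsApp. Tente novamente.",
};

async function friendlyError(err: unknown): Promise<string> {
  const fallback = "Erro inesperado. Tente novamente.";
  const anyErr = err as { context?: { json?: () => Promise<unknown> } };
  const json = anyErr?.context?.json;
  if (typeof json !== "function") return fallback; // not an HTTP function error
  try {
    const body = (await anyErr.context!.json!()) as { code?: string; message?: string };
    return (body?.code && MESSAGES[body.code]) || body?.message || fallback;
  } catch {
    return fallback; // body was not JSON
  }
}
```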

Frontend (SaaS admin):
- In /saas/addons (reusing the existing pattern; no parallel page created):
  - Tab 4 "WhatsApp Packages" — whatsapp_credit_packages CRUD with DataTable,
    inline is_active toggle, edit dialog with validation
  - Tab 5 "WhatsApp Topup" — tenant Select with live balance · RPC
    add_whatsapp_credits with p_admin_id = auth.uid() (audit trail) · history
    of the last 20 topup/adjustment/refund transactions

═══════════════════════════════════════════════════════════════════════════
GROUP 2 — AUTOMATION
═══════════════════════════════════════════════════════════════════════════

2.3 Auto-reply · conversation_autoreply_settings + conversation_autoreply_log
· 3 schedule modes (weekly-rules agenda, custom business_hours,
custom_window) · per-thread cooldown · respects opt-out · now works on both
Evolution AND Twilio (shared hooks)
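The per-thread cooldown reduces to a timestamp comparison; a minimal sketch, with illustrative names:

```typescript
// Sketch of the per-thread cooldown: auto-reply only if this thread
// has never been auto-replied, or the last auto-reply is older than
// the cooldown window. Field names are illustrative.
function shouldAutoReply(
  lastReplyAt: Date | null,
  cooldownMinutes: number,
  now: Date,
): boolean {
  if (!lastReplyAt) return true; // never replied on this thread
  const elapsedMs = now.getTime() - lastReplyAt.getTime();
  return elapsedMs >= cooldownMinutes * 60_000;
}
```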

2.4 Session reminders · conversation_session_reminders_settings + _logs ·
send-session-reminders edge function (cron) · 24h-before and 2h-before
windows · Twilio deducts a credit, with rollback on failure
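A hedged sketch of the window check such a cron might run — the 15-minute tick and exact boundary handling are assumptions; only the 24h/2h windows come from the implementation:

```typescript
// Sketch: decide which reminder window (if any) a session falls into
// on this cron tick. Windows are (24h - tick, 24h] and (2h - tick, 2h]
// before the session start, where tick = cron interval. Illustrative.
type Window = "24h" | "2h" | null;

function reminderWindow(sessionStart: Date, now: Date, tickMinutes = 15): Window {
  const minutesUntil = (sessionStart.getTime() - now.getTime()) / 60_000;
  if (minutesUntil <= 24 * 60 && minutesUntil > 24 * 60 - tickMinutes) return "24h";
  if (minutesUntil <= 2 * 60 && minutesUntil > 2 * 60 - tickMinutes) return "2h";
  return null;
}
```

Half-open windows sized to the cron interval mean each session matches each window on exactly one tick, so the _logs table only has to deduplicate against retries.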

═══════════════════════════════════════════════════════════════════════════
GROUP 5 — COMPLIANCE (LGPD Art. 18 §2)
═══════════════════════════════════════════════════════════════════════════

5.2 Opt-out · conversation_optouts + conversation_optout_keywords (10 system
seeds + per-tenant custom) · detection via word-boundary regex plus
normalization (lowercase + stripped accents + punctuation) · automatic ack
(deducts a credit on Twilio) · opt-in via "voltar", "retornar", "reativar",
"restart" · /configuracoes/conversas-optouts page with keyword CRUD
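The detection pipeline (normalize, then word-boundary match) can be sketched as below — a simplified illustration that assumes keywords are regex-safe plain words:

```typescript
// Sketch of opt-out keyword detection: normalize the inbound text
// (lowercase, strip combining accents, punctuation -> space), then
// test each normalized keyword with a word-boundary regex.
function normalizeText(text: string): string {
  return text
    .toLowerCase()
    .normalize("NFD")
    .replace(/[\u0300-\u036f]/g, "") // drop combining accents (ã -> a)
    .replace(/[^\p{L}\p{N}\s]/gu, " ") // punctuation -> space
    .replace(/\s+/g, " ")
    .trim();
}

function matchedKeyword(text: string, keywords: string[]): string | null {
  const norm = normalizeText(text);
  for (const kw of keywords) {
    // Assumes keywords are plain words (no regex metacharacters).
    const re = new RegExp(`\\b${normalizeText(kw)}\\b`);
    if (re.test(norm)) return kw;
  }
  return null;
}
```

The word boundary is what keeps "vou reparar o erro" from triggering a "parar" opt-out.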

═══════════════════════════════════════════════════════════════════════════
POLYMORPHIC REFACTOR — PHONES + EMAILS
═══════════════════════════════════════════════════════════════════════════

- contact_types + contact_phones (entity_type + entity_id) — migration
  20260421000008 · contact_email_types + contact_emails — 20260421000011
- Components ContactPhonesEditor.vue and ContactEmailsEditor.vue (add/edit/
  remove with confirm, primary selector, WhatsApp-linked badge)
- Composables useContactPhones.js + useContactEmails.js with
  unsetOtherPrimaries() and validation
- Swapped in on PatientsCadastroPage.vue and MedicosPage.vue (legacy
  telefone/telefone_alternativo and email_principal/email_alternativo fields
  removed)
- Retroactive migration v2 (20260421000010) detects conversation_messages
  and creates/updates the phone as a linked WhatsApp number
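The invariant behind unsetOtherPrimaries() — at most one primary contact per entity — in sketch form, with an illustrative record shape:

```typescript
// Sketch of the single-primary invariant: marking one phone as
// primary clears the flag on every other phone of the same entity.
// The record shape is illustrative.
interface ContactPhone { id: string; is_primary: boolean }

function setPrimary(phones: ContactPhone[], id: string): ContactPhone[] {
  return phones.map((p) => ({ ...p, is_primary: p.id === id }));
}
```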

═══════════════════════════════════════════════════════════════════════════
VISUAL POLISH + INFRA
═══════════════════════════════════════════════════════════════════════════

- Simplified skeletons on the therapist dashboard
- Fade-up animations with stagger via [--delay:Xms] (specificity fix over the
  .dash-card box-shadow transition)
- ConfirmDialog with group="conversation-drawer" (avoids duplicate mounting)
- PrimeVue image preview with a download button injected via MutationObserver
  (fetch + blob so it works cross-origin)
- Audio/video with preload="metadata" and the browser's playback-speed controls
- friendlySendError() maps edge function codes to pt-BR messages via
  error.context.json()
- #cfg-page-actions Teleport for global Settings actions
- Brotli/Gzip + Vue/PrimeVue auto-import + bundle analyzer
- Consolidated AppLayout (per-area duplicates removed) + RouterPassthrough
- Removed the debug console.trace that sat in a router watch and in Supabase
  queries (it degraded performance for everyone)
- Realtime on conversation_messages via the supabase_realtime publication
- Global floating notifier with beep + mute toggle (4 layers: badge +
  bell + popup + browser notification)

═══════════════════════════════════════════════════════════════════════════
NEW MIGRATIONS (13)
═══════════════════════════════════════════════════════════════════════════

20260420000001_patient_intake_invite_info_rpc
20260420000002_audit_logs_lgpd
20260420000003_audit_logs_unified_view
20260420000004_lgpd_export_patient_rpc
20260420000005_conversation_messages
20260420000005_search_global_rpc
20260420000006_conv_messages_notifications
20260420000007_notif_channels_saas_admin_insert
20260420000008_conv_messages_realtime
20260420000009_conv_messages_delivery_status
20260421000001_whatsapp_media_bucket
20260421000002_conversation_notes
20260421000003_conversation_tags
20260421000004_conversation_autoreply
20260421000005_conversation_optouts
20260421000006_session_reminders
20260421000007_whatsapp_credits
20260421000008_contact_phones
20260421000009_retroactive_whatsapp_link
20260421000010_retroactive_whatsapp_link_v2
20260421000011_contact_emails
20260421000012_conversation_assignments
20260421000013_tenant_cpf_cnpj

═══════════════════════════════════════════════════════════════════════════
NEW / MODIFIED EDGE FUNCTIONS
═══════════════════════════════════════════════════════════════════════════

New / modified:
- _shared/whatsapp-hooks.ts (módulo compartilhado)
- asaas-webhook
- create-whatsapp-credit-charge
- deactivate-notification-channel
- evolution-webhook-provision
- evolution-whatsapp-inbound
- get-intake-invite-info
- notification-webhook
- send-session-reminders
- send-whatsapp-message
- submit-patient-intake
- twilio-whatsapp-inbound

═══════════════════════════════════════════════════════════════════════════
FRONTEND — SUMMARY
═══════════════════════════════════════════════════════════════════════════

New composables: useAddonExtrato, useAuditoria, useAutoReplySettings,
useClinicKPIs, useContactEmails, useContactPhones, useConversationAssignment,
useConversationNotes, useConversationOptouts, useConversationTags,
useConversations, useLgpdExport, useSessionReminders, useWhatsappCredits

Stores: conversationDrawerStore

New components: ConversationDrawer, GlobalInboundNotifier, GlobalSearch,
ContactEmailsEditor, ContactPhonesEditor

New pages: CRMConversasPage, PatientConversationsTab, AddonsExtratoPage,
AuditoriaPage, NotificationsHistoryPage, ConfiguracoesWhatsappChooserPage,
ConfiguracoesConversasAutoreplyPage, ConfiguracoesConversasOptoutsPage,
ConfiguracoesConversasTagsPage, ConfiguracoesCreditosWhatsappPage,
ConfiguracoesLembretesSessaoPage

New utils: addonExtratoExport, auditoriaExport, excelExport,
lgpdExportFormats

Existing pages changed: ClinicDashboard, PatientsCadastroPage,
PatientCadastroDialog, PatientsListPage, MedicosPage, PatientProntuario,
ConfiguracoesWhatsappPage, SaasWhatsappPage, ConfiguracoesRecursosExtrasPage,
ConfiguracoesPage, AgendaTerapeutaPage, AgendaClinicaPage, NotificationItem,
NotificationDrawer, AppLayout, AppTopbar, useMenuBadges,
patientsRepository, SaasAddonsPage (WhatsApp tabs 4 + 5)

Routes: routes.clinic, routes.configs, routes.therapist updated
Menus: clinic.menu, therapist.menu, saas.menu updated

═══════════════════════════════════════════════════════════════════════════
NOTES
═══════════════════════════════════════════════════════════════════════════

- After deploying, run supabase functions serve --no-verify-jwt
  --env-file supabase/functions/.env to load the _shared module
- WHATSAPP_SETUP.md rewritten (~400 lines) with full setup for the 3
  providers + troubleshooting + LGPD
- HANDOFF.md updated with the current state and next steps

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 07:05:24 -03:00


-- Functions: storage
-- Auto-generated at 2026-04-21T23:16:34.950Z
-- Total: 15
CREATE FUNCTION storage.can_insert_object(bucketid text, name text, owner uuid, metadata jsonb) RETURNS void
    LANGUAGE plpgsql
    AS $$
BEGIN
    INSERT INTO "storage"."objects" ("bucket_id", "name", "owner", "metadata") VALUES (bucketid, name, owner, metadata);
    -- hack to rollback the successful insert
    RAISE sqlstate 'PT200' using
        message = 'ROLLBACK',
        detail = 'rollback successful insert';
END
$$;
CREATE FUNCTION storage.enforce_bucket_name_length() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
begin
    if length(new.name) > 100 then
        raise exception 'bucket name "%" is too long (% characters). Max is 100.', new.name, length(new.name);
    end if;
    return new;
end;
$$;
CREATE FUNCTION storage.extension(name text) RETURNS text
    LANGUAGE plpgsql
    AS $$
DECLARE
    _parts text[];
    _filename text;
BEGIN
    select string_to_array(name, '/') into _parts;
    select _parts[array_length(_parts,1)] into _filename;
    -- @todo return the last part instead of 2
    return reverse(split_part(reverse(_filename), '.', 1));
END
$$;
CREATE FUNCTION storage.filename(name text) RETURNS text
    LANGUAGE plpgsql
    AS $$
DECLARE
    _parts text[];
BEGIN
    select string_to_array(name, '/') into _parts;
    return _parts[array_length(_parts,1)];
END
$$;
CREATE FUNCTION storage.foldername(name text) RETURNS text[]
    LANGUAGE plpgsql
    AS $$
DECLARE
    _parts text[];
BEGIN
    select string_to_array(name, '/') into _parts;
    return _parts[1:array_length(_parts,1)-1];
END
$$;
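-- For illustration only: the three path helpers above, mirrored in
-- TypeScript. Behavior matches the SQL, including extension() returning
-- the whole filename when it contains no dot.

```typescript
// TypeScript mirrors of storage.filename / foldername / extension:
// split the object key on "/", then take the last segment, everything
// before it, or the text after the last "." of the last segment.
function filename(key: string): string {
  const parts = key.split("/");
  return parts[parts.length - 1];
}

function foldername(key: string): string[] {
  return key.split("/").slice(0, -1);
}

function extension(key: string): string {
  const name = filename(key);
  const dot = name.lastIndexOf(".");
  return dot === -1 ? name : name.slice(dot + 1); // no dot: whole name, like the SQL
}
```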
CREATE FUNCTION storage.get_common_prefix(p_key text, p_prefix text, p_delimiter text) RETURNS text
    LANGUAGE sql IMMUTABLE
    AS $$
SELECT CASE
    WHEN position(p_delimiter IN substring(p_key FROM length(p_prefix) + 1)) > 0
        THEN left(p_key, length(p_prefix) + position(p_delimiter IN substring(p_key FROM length(p_prefix) + 1)))
    ELSE NULL
END;
$$;
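-- get_common_prefix truncates a key just past the first delimiter that
-- follows the listing prefix (a "folder" entry), or returns NULL for a
-- leaf object. A TypeScript mirror of the same logic, for illustration:

```typescript
// Mirror of storage.get_common_prefix: look for the first delimiter
// after the listing prefix; if found, return the key truncated just
// after it (including the delimiter), otherwise null (leaf object).
function getCommonPrefix(key: string, prefix: string, delimiter: string): string | null {
  const rest = key.slice(prefix.length);
  const pos = rest.indexOf(delimiter);
  if (pos < 0) return null; // no delimiter past the prefix: plain object
  return key.slice(0, prefix.length + pos + 1); // keep the delimiter
}
```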
CREATE FUNCTION storage.get_size_by_bucket() RETURNS TABLE(size bigint, bucket_id text)
    LANGUAGE plpgsql
    AS $$
BEGIN
    return query
        select sum((metadata->>'size')::int) as size, obj.bucket_id
        from "storage".objects as obj
        group by obj.bucket_id;
END
$$;
CREATE FUNCTION storage.list_multipart_uploads_with_delimiter(bucket_id text, prefix_param text, delimiter_param text, max_keys integer DEFAULT 100, next_key_token text DEFAULT ''::text, next_upload_token text DEFAULT ''::text) RETURNS TABLE(key text, id text, created_at timestamp with time zone)
LANGUAGE plpgsql
AS $_$
BEGIN
RETURN QUERY EXECUTE
'SELECT DISTINCT ON(key COLLATE "C") * from (
SELECT
CASE
WHEN position($2 IN substring(key from length($1) + 1)) > 0 THEN
substring(key from 1 for length($1) + position($2 IN substring(key from length($1) + 1)))
ELSE
key
END AS key, id, created_at
FROM
storage.s3_multipart_uploads
WHERE
bucket_id = $5 AND
key ILIKE $1 || ''%'' AND
CASE
WHEN $4 != '''' AND $6 = '''' THEN
CASE
WHEN position($2 IN substring(key from length($1) + 1)) > 0 THEN
substring(key from 1 for length($1) + position($2 IN substring(key from length($1) + 1))) COLLATE "C" > $4
ELSE
key COLLATE "C" > $4
END
ELSE
true
END AND
CASE
WHEN $6 != '''' THEN
id COLLATE "C" > $6
ELSE
true
END
ORDER BY
key COLLATE "C" ASC, created_at ASC) as e order by key COLLATE "C" LIMIT $3'
USING prefix_param, delimiter_param, max_keys, next_key_token, bucket_id, next_upload_token;
END;
$_$;
CREATE FUNCTION storage.list_objects_with_delimiter(_bucket_id text, prefix_param text, delimiter_param text, max_keys integer DEFAULT 100, start_after text DEFAULT ''::text, next_token text DEFAULT ''::text, sort_order text DEFAULT 'asc'::text) RETURNS TABLE(name text, id uuid, metadata jsonb, updated_at timestamp with time zone, created_at timestamp with time zone, last_accessed_at timestamp with time zone)
LANGUAGE plpgsql STABLE
AS $_$
DECLARE
v_peek_name TEXT;
v_current RECORD;
v_common_prefix TEXT;
-- Configuration
v_is_asc BOOLEAN;
v_prefix TEXT;
v_start TEXT;
v_upper_bound TEXT;
v_file_batch_size INT;
-- Seek state
v_next_seek TEXT;
v_count INT := 0;
-- Dynamic SQL for batch query only
v_batch_query TEXT;
BEGIN
-- ========================================================================
-- INITIALIZATION
-- ========================================================================
v_is_asc := lower(coalesce(sort_order, 'asc')) = 'asc';
v_prefix := coalesce(prefix_param, '');
v_start := CASE WHEN coalesce(next_token, '') <> '' THEN next_token ELSE coalesce(start_after, '') END;
v_file_batch_size := LEAST(GREATEST(max_keys * 2, 100), 1000);
-- Calculate upper bound for prefix filtering (bytewise, using COLLATE "C")
IF v_prefix = '' THEN
v_upper_bound := NULL;
ELSIF right(v_prefix, 1) = delimiter_param THEN
v_upper_bound := left(v_prefix, -1) || chr(ascii(delimiter_param) + 1);
ELSE
v_upper_bound := left(v_prefix, -1) || chr(ascii(right(v_prefix, 1)) + 1);
END IF;
-- Build batch query (dynamic SQL - called infrequently, amortized over many rows)
IF v_is_asc THEN
IF v_upper_bound IS NOT NULL THEN
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND o.name COLLATE "C" >= $2 ' ||
'AND o.name COLLATE "C" < $3 ORDER BY o.name COLLATE "C" ASC LIMIT $4';
ELSE
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND o.name COLLATE "C" >= $2 ' ||
'ORDER BY o.name COLLATE "C" ASC LIMIT $4';
END IF;
ELSE
IF v_upper_bound IS NOT NULL THEN
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND o.name COLLATE "C" < $2 ' ||
'AND o.name COLLATE "C" >= $3 ORDER BY o.name COLLATE "C" DESC LIMIT $4';
ELSE
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND o.name COLLATE "C" < $2 ' ||
'ORDER BY o.name COLLATE "C" DESC LIMIT $4';
END IF;
END IF;
-- ========================================================================
-- SEEK INITIALIZATION: Determine starting position
-- ========================================================================
IF v_start = '' THEN
IF v_is_asc THEN
v_next_seek := v_prefix;
ELSE
-- DESC without cursor: find the last item in range
IF v_upper_bound IS NOT NULL THEN
SELECT o.name INTO v_next_seek FROM storage.objects o
WHERE o.bucket_id = _bucket_id AND o.name COLLATE "C" >= v_prefix AND o.name COLLATE "C" < v_upper_bound
ORDER BY o.name COLLATE "C" DESC LIMIT 1;
ELSIF v_prefix <> '' THEN
SELECT o.name INTO v_next_seek FROM storage.objects o
WHERE o.bucket_id = _bucket_id AND o.name COLLATE "C" >= v_prefix
ORDER BY o.name COLLATE "C" DESC LIMIT 1;
ELSE
SELECT o.name INTO v_next_seek FROM storage.objects o
WHERE o.bucket_id = _bucket_id
ORDER BY o.name COLLATE "C" DESC LIMIT 1;
END IF;
IF v_next_seek IS NOT NULL THEN
v_next_seek := v_next_seek || delimiter_param;
ELSE
RETURN;
END IF;
END IF;
ELSE
-- Cursor provided: determine if it refers to a folder or leaf
IF EXISTS (
SELECT 1 FROM storage.objects o
WHERE o.bucket_id = _bucket_id
AND o.name COLLATE "C" LIKE v_start || delimiter_param || '%'
LIMIT 1
) THEN
-- Cursor refers to a folder
IF v_is_asc THEN
v_next_seek := v_start || chr(ascii(delimiter_param) + 1);
ELSE
v_next_seek := v_start || delimiter_param;
END IF;
ELSE
-- Cursor refers to a leaf object
IF v_is_asc THEN
v_next_seek := v_start || delimiter_param;
ELSE
v_next_seek := v_start;
END IF;
END IF;
END IF;
-- ========================================================================
-- MAIN LOOP: Hybrid peek-then-batch algorithm
-- Uses STATIC SQL for peek (hot path) and DYNAMIC SQL for batch
-- ========================================================================
LOOP
EXIT WHEN v_count >= max_keys;
-- STEP 1: PEEK using STATIC SQL (plan cached, very fast)
IF v_is_asc THEN
IF v_upper_bound IS NOT NULL THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = _bucket_id AND o.name COLLATE "C" >= v_next_seek AND o.name COLLATE "C" < v_upper_bound
ORDER BY o.name COLLATE "C" ASC LIMIT 1;
ELSE
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = _bucket_id AND o.name COLLATE "C" >= v_next_seek
ORDER BY o.name COLLATE "C" ASC LIMIT 1;
END IF;
ELSE
IF v_upper_bound IS NOT NULL THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = _bucket_id AND o.name COLLATE "C" < v_next_seek AND o.name COLLATE "C" >= v_prefix
ORDER BY o.name COLLATE "C" DESC LIMIT 1;
ELSIF v_prefix <> '' THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = _bucket_id AND o.name COLLATE "C" < v_next_seek AND o.name COLLATE "C" >= v_prefix
ORDER BY o.name COLLATE "C" DESC LIMIT 1;
ELSE
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = _bucket_id AND o.name COLLATE "C" < v_next_seek
ORDER BY o.name COLLATE "C" DESC LIMIT 1;
END IF;
END IF;
EXIT WHEN v_peek_name IS NULL;
-- STEP 2: Check if this is a FOLDER or FILE
v_common_prefix := storage.get_common_prefix(v_peek_name, v_prefix, delimiter_param);
IF v_common_prefix IS NOT NULL THEN
-- FOLDER: Emit and skip to next folder (no heap access needed)
name := rtrim(v_common_prefix, delimiter_param);
id := NULL;
updated_at := NULL;
created_at := NULL;
last_accessed_at := NULL;
metadata := NULL;
RETURN NEXT;
v_count := v_count + 1;
-- Advance seek past the folder range
IF v_is_asc THEN
v_next_seek := left(v_common_prefix, -1) || chr(ascii(delimiter_param) + 1);
ELSE
v_next_seek := v_common_prefix;
END IF;
ELSE
-- FILE: Batch fetch using DYNAMIC SQL (overhead amortized over many rows)
-- For ASC: upper_bound is the exclusive upper limit (< condition)
-- For DESC: prefix is the inclusive lower limit (>= condition)
FOR v_current IN EXECUTE v_batch_query USING _bucket_id, v_next_seek,
CASE WHEN v_is_asc THEN COALESCE(v_upper_bound, v_prefix) ELSE v_prefix END, v_file_batch_size
LOOP
v_common_prefix := storage.get_common_prefix(v_current.name, v_prefix, delimiter_param);
IF v_common_prefix IS NOT NULL THEN
-- Hit a folder: exit batch, let peek handle it
v_next_seek := v_current.name;
EXIT;
END IF;
-- Emit file
name := v_current.name;
id := v_current.id;
updated_at := v_current.updated_at;
created_at := v_current.created_at;
last_accessed_at := v_current.last_accessed_at;
metadata := v_current.metadata;
RETURN NEXT;
v_count := v_count + 1;
-- Advance seek past this file
IF v_is_asc THEN
v_next_seek := v_current.name || delimiter_param;
ELSE
v_next_seek := v_current.name;
END IF;
EXIT WHEN v_count >= max_keys;
END LOOP;
END IF;
END LOOP;
END;
$_$;
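-- The listing above implements S3-style delimiter collapsing: runs of
-- keys sharing a common prefix become one folder entry, leaves are
-- emitted as-is. A simplified in-memory TypeScript illustration — the
-- real function seeks through the (bucket_id, name) index instead of
-- scanning an array, and also handles cursors, DESC order and batching:

```typescript
// Simplified delimiter listing over keys already in "C"-collation
// (bytewise) order: collapse each folder to a single entry, emit
// leaves directly, stop at maxKeys.
type Entry = { name: string; isFolder: boolean };

function listWithDelimiter(
  sortedKeys: string[],
  prefix: string,
  delimiter: string,
  maxKeys: number,
): Entry[] {
  const out: Entry[] = [];
  let lastFolder: string | null = null;
  for (const key of sortedKeys) {
    if (out.length >= maxKeys) break;
    if (!key.startsWith(prefix)) continue;
    const rest = key.slice(prefix.length);
    const pos = rest.indexOf(delimiter);
    if (pos >= 0) {
      // Folder entry, named without the trailing delimiter (like rtrim in SQL).
      const folder = key.slice(0, prefix.length + pos);
      if (folder !== lastFolder) {
        out.push({ name: folder, isFolder: true }); // emit each folder once
        lastFolder = folder;
      }
    } else {
      out.push({ name: key, isFolder: false }); // leaf object
    }
  }
  return out;
}
```

Deduplicating against only the previous folder works because the input is sorted, so all keys under one folder are adjacent — the same property the SQL exploits when it seeks past a folder's whole range in one jump.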
CREATE FUNCTION storage.operation() RETURNS text
    LANGUAGE plpgsql STABLE
    AS $$
BEGIN
    RETURN current_setting('storage.operation', true);
END;
$$;
CREATE FUNCTION storage.protect_delete() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
    -- Check if storage.allow_delete_query is set to 'true'
    IF COALESCE(current_setting('storage.allow_delete_query', true), 'false') != 'true' THEN
        RAISE EXCEPTION 'Direct deletion from storage tables is not allowed. Use the Storage API instead.'
            USING HINT = 'This prevents accidental data loss from orphaned objects.',
                  ERRCODE = '42501';
    END IF;
    RETURN NULL;
END;
$$;
CREATE FUNCTION storage.search(prefix text, bucketname text, limits integer DEFAULT 100, levels integer DEFAULT 1, offsets integer DEFAULT 0, search text DEFAULT ''::text, sortcolumn text DEFAULT 'name'::text, sortorder text DEFAULT 'asc'::text) RETURNS TABLE(name text, id uuid, updated_at timestamp with time zone, created_at timestamp with time zone, last_accessed_at timestamp with time zone, metadata jsonb)
LANGUAGE plpgsql STABLE
AS $_$
DECLARE
v_peek_name TEXT;
v_current RECORD;
v_common_prefix TEXT;
v_delimiter CONSTANT TEXT := '/';
-- Configuration
v_limit INT;
v_prefix TEXT;
v_prefix_lower TEXT;
v_is_asc BOOLEAN;
v_order_by TEXT;
v_sort_order TEXT;
v_upper_bound TEXT;
v_file_batch_size INT;
-- Dynamic SQL for batch query only
v_batch_query TEXT;
-- Seek state
v_next_seek TEXT;
v_count INT := 0;
v_skipped INT := 0;
BEGIN
-- ========================================================================
-- INITIALIZATION
-- ========================================================================
v_limit := LEAST(coalesce(limits, 100), 1500);
v_prefix := coalesce(prefix, '') || coalesce(search, '');
v_prefix_lower := lower(v_prefix);
v_is_asc := lower(coalesce(sortorder, 'asc')) = 'asc';
v_file_batch_size := LEAST(GREATEST(v_limit * 2, 100), 1000);
-- Validate sort column
CASE lower(coalesce(sortcolumn, 'name'))
WHEN 'name' THEN v_order_by := 'name';
WHEN 'updated_at' THEN v_order_by := 'updated_at';
WHEN 'created_at' THEN v_order_by := 'created_at';
WHEN 'last_accessed_at' THEN v_order_by := 'last_accessed_at';
ELSE v_order_by := 'name';
END CASE;
v_sort_order := CASE WHEN v_is_asc THEN 'asc' ELSE 'desc' END;
-- ========================================================================
-- NON-NAME SORTING: Use path_tokens approach (unchanged)
-- ========================================================================
IF v_order_by != 'name' THEN
RETURN QUERY EXECUTE format(
$sql$
WITH folders AS (
SELECT path_tokens[$1] AS folder
FROM storage.objects
WHERE objects.name ILIKE $2 || '%%'
AND bucket_id = $3
AND array_length(objects.path_tokens, 1) <> $1
GROUP BY folder
ORDER BY folder %s
)
(SELECT folder AS "name",
NULL::uuid AS id,
NULL::timestamptz AS updated_at,
NULL::timestamptz AS created_at,
NULL::timestamptz AS last_accessed_at,
NULL::jsonb AS metadata FROM folders)
UNION ALL
(SELECT path_tokens[$1] AS "name",
id, updated_at, created_at, last_accessed_at, metadata
FROM storage.objects
WHERE objects.name ILIKE $2 || '%%'
AND bucket_id = $3
AND array_length(objects.path_tokens, 1) = $1
ORDER BY %I %s)
LIMIT $4 OFFSET $5
$sql$, v_sort_order, v_order_by, v_sort_order
) USING levels, v_prefix, bucketname, v_limit, offsets;
RETURN;
END IF;
-- ========================================================================
-- NAME SORTING: Hybrid skip-scan with batch optimization
-- ========================================================================
-- Calculate upper bound for prefix filtering
IF v_prefix_lower = '' THEN
v_upper_bound := NULL;
ELSIF right(v_prefix_lower, 1) = v_delimiter THEN
v_upper_bound := left(v_prefix_lower, -1) || chr(ascii(v_delimiter) + 1);
ELSE
v_upper_bound := left(v_prefix_lower, -1) || chr(ascii(right(v_prefix_lower, 1)) + 1);
END IF;
-- Build batch query (dynamic SQL - called infrequently, amortized over many rows)
IF v_is_asc THEN
IF v_upper_bound IS NOT NULL THEN
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND lower(o.name) COLLATE "C" >= $2 ' ||
'AND lower(o.name) COLLATE "C" < $3 ORDER BY lower(o.name) COLLATE "C" ASC LIMIT $4';
ELSE
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND lower(o.name) COLLATE "C" >= $2 ' ||
'ORDER BY lower(o.name) COLLATE "C" ASC LIMIT $4';
END IF;
ELSE
IF v_upper_bound IS NOT NULL THEN
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND lower(o.name) COLLATE "C" < $2 ' ||
'AND lower(o.name) COLLATE "C" >= $3 ORDER BY lower(o.name) COLLATE "C" DESC LIMIT $4';
ELSE
v_batch_query := 'SELECT o.name, o.id, o.updated_at, o.created_at, o.last_accessed_at, o.metadata ' ||
'FROM storage.objects o WHERE o.bucket_id = $1 AND lower(o.name) COLLATE "C" < $2 ' ||
'ORDER BY lower(o.name) COLLATE "C" DESC LIMIT $4';
END IF;
END IF;
-- Initialize seek position
IF v_is_asc THEN
v_next_seek := v_prefix_lower;
ELSE
-- DESC: find the last item in range first (static SQL)
IF v_upper_bound IS NOT NULL THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname AND lower(o.name) COLLATE "C" >= v_prefix_lower AND lower(o.name) COLLATE "C" < v_upper_bound
ORDER BY lower(o.name) COLLATE "C" DESC LIMIT 1;
ELSIF v_prefix_lower <> '' THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname AND lower(o.name) COLLATE "C" >= v_prefix_lower
ORDER BY lower(o.name) COLLATE "C" DESC LIMIT 1;
ELSE
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname
ORDER BY lower(o.name) COLLATE "C" DESC LIMIT 1;
END IF;
IF v_peek_name IS NOT NULL THEN
v_next_seek := lower(v_peek_name) || v_delimiter;
ELSE
RETURN;
END IF;
END IF;
-- ========================================================================
-- MAIN LOOP: Hybrid peek-then-batch algorithm
-- Uses STATIC SQL for peek (hot path) and DYNAMIC SQL for batch
-- ========================================================================
LOOP
EXIT WHEN v_count >= v_limit;
-- STEP 1: PEEK using STATIC SQL (plan cached, very fast)
IF v_is_asc THEN
IF v_upper_bound IS NOT NULL THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname AND lower(o.name) COLLATE "C" >= v_next_seek AND lower(o.name) COLLATE "C" < v_upper_bound
ORDER BY lower(o.name) COLLATE "C" ASC LIMIT 1;
ELSE
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname AND lower(o.name) COLLATE "C" >= v_next_seek
ORDER BY lower(o.name) COLLATE "C" ASC LIMIT 1;
END IF;
ELSE
IF v_upper_bound IS NOT NULL THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname AND lower(o.name) COLLATE "C" < v_next_seek AND lower(o.name) COLLATE "C" >= v_prefix_lower
ORDER BY lower(o.name) COLLATE "C" DESC LIMIT 1;
ELSIF v_prefix_lower <> '' THEN
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname AND lower(o.name) COLLATE "C" < v_next_seek AND lower(o.name) COLLATE "C" >= v_prefix_lower
ORDER BY lower(o.name) COLLATE "C" DESC LIMIT 1;
ELSE
SELECT o.name INTO v_peek_name FROM storage.objects o
WHERE o.bucket_id = bucketname AND lower(o.name) COLLATE "C" < v_next_seek
ORDER BY lower(o.name) COLLATE "C" DESC LIMIT 1;
END IF;
END IF;
EXIT WHEN v_peek_name IS NULL;
-- STEP 2: Check if this is a FOLDER or FILE
v_common_prefix := storage.get_common_prefix(lower(v_peek_name), v_prefix_lower, v_delimiter);
IF v_common_prefix IS NOT NULL THEN
-- FOLDER: Handle offset, emit if needed, skip to next folder
IF v_skipped < offsets THEN
v_skipped := v_skipped + 1;
ELSE
name := split_part(rtrim(v_common_prefix, v_delimiter), v_delimiter, levels);
id := NULL;
updated_at := NULL;
created_at := NULL;
last_accessed_at := NULL;
metadata := NULL;
RETURN NEXT;
v_count := v_count + 1;
END IF;
-- Advance seek past the folder range
IF v_is_asc THEN
v_next_seek := lower(left(v_common_prefix, -1)) || chr(ascii(v_delimiter) + 1);
ELSE
v_next_seek := lower(v_common_prefix);
END IF;
ELSE
-- FILE: Batch fetch using DYNAMIC SQL (overhead amortized over many rows)
-- For ASC: upper_bound is the exclusive upper limit (< condition)
-- For DESC: prefix_lower is the inclusive lower limit (>= condition)
FOR v_current IN EXECUTE v_batch_query
USING bucketname, v_next_seek,
CASE WHEN v_is_asc THEN COALESCE(v_upper_bound, v_prefix_lower) ELSE v_prefix_lower END, v_file_batch_size
LOOP
v_common_prefix := storage.get_common_prefix(lower(v_current.name), v_prefix_lower, v_delimiter);
IF v_common_prefix IS NOT NULL THEN
-- Hit a folder: exit batch, let peek handle it
v_next_seek := lower(v_current.name);
EXIT;
END IF;
-- Handle offset skipping
IF v_skipped < offsets THEN
v_skipped := v_skipped + 1;
ELSE
-- Emit file
name := split_part(v_current.name, v_delimiter, levels);
id := v_current.id;
updated_at := v_current.updated_at;
created_at := v_current.created_at;
last_accessed_at := v_current.last_accessed_at;
metadata := v_current.metadata;
RETURN NEXT;
v_count := v_count + 1;
END IF;
-- Advance seek past this file
IF v_is_asc THEN
v_next_seek := lower(v_current.name) || v_delimiter;
ELSE
v_next_seek := lower(v_current.name);
END IF;
EXIT WHEN v_count >= v_limit;
END LOOP;
END IF;
END LOOP;
END;
$_$;
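-- Both search() and list_objects_with_delimiter() turn their prefix
-- filter into a half-open range scan [prefix, upper_bound) by bumping
-- the prefix's last character by one code point — both CASE branches of
-- the SQL reduce to that one operation. A TypeScript mirror (assumes,
-- as chr(ascii(x) + 1) does, that the last character is a single byte
-- not at the top of its range):

```typescript
// Exclusive upper bound for a prefix range scan: every key starting
// with `prefix` sorts (bytewise) below this bound. Empty prefix means
// an unbounded scan, mirroring the SQL's NULL.
function prefixUpperBound(prefix: string): string | null {
  if (prefix === "") return null;
  const lastCode = prefix.charCodeAt(prefix.length - 1);
  return prefix.slice(0, -1) + String.fromCharCode(lastCode + 1);
}
```

The point of the bound is that `name >= prefix AND name < upperBound` is a plain b-tree range, so the peek queries never touch rows outside the prefix.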
CREATE FUNCTION storage.search_by_timestamp(p_prefix text, p_bucket_id text, p_limit integer, p_level integer, p_start_after text, p_sort_order text, p_sort_column text, p_sort_column_after text) RETURNS TABLE(key text, name text, id uuid, updated_at timestamp with time zone, created_at timestamp with time zone, last_accessed_at timestamp with time zone, metadata jsonb)
LANGUAGE plpgsql STABLE
AS $_$
DECLARE
v_cursor_op text;
v_query text;
v_prefix text;
BEGIN
v_prefix := coalesce(p_prefix, '');
IF p_sort_order = 'asc' THEN
v_cursor_op := '>';
ELSE
v_cursor_op := '<';
END IF;
v_query := format($sql$
WITH raw_objects AS (
SELECT
o.name AS obj_name,
o.id AS obj_id,
o.updated_at AS obj_updated_at,
o.created_at AS obj_created_at,
o.last_accessed_at AS obj_last_accessed_at,
o.metadata AS obj_metadata,
storage.get_common_prefix(o.name, $1, '/') AS common_prefix
FROM storage.objects o
WHERE o.bucket_id = $2
AND o.name COLLATE "C" LIKE $1 || '%%'
),
-- Aggregate common prefixes (folders)
-- Both created_at and updated_at use MIN(obj_created_at) to match the old prefixes table behavior
aggregated_prefixes AS (
SELECT
rtrim(common_prefix, '/') AS name,
NULL::uuid AS id,
MIN(obj_created_at) AS updated_at,
MIN(obj_created_at) AS created_at,
NULL::timestamptz AS last_accessed_at,
NULL::jsonb AS metadata,
TRUE AS is_prefix
FROM raw_objects
WHERE common_prefix IS NOT NULL
GROUP BY common_prefix
),
leaf_objects AS (
SELECT
obj_name AS name,
obj_id AS id,
obj_updated_at AS updated_at,
obj_created_at AS created_at,
obj_last_accessed_at AS last_accessed_at,
obj_metadata AS metadata,
FALSE AS is_prefix
FROM raw_objects
WHERE common_prefix IS NULL
),
combined AS (
SELECT * FROM aggregated_prefixes
UNION ALL
SELECT * FROM leaf_objects
),
filtered AS (
SELECT *
FROM combined
WHERE (
$5 = ''
OR ROW(
date_trunc('milliseconds', %I),
name COLLATE "C"
) %s ROW(
COALESCE(NULLIF($6, '')::timestamptz, 'epoch'::timestamptz),
$5
)
)
)
SELECT
split_part(name, '/', $3) AS key,
name,
id,
updated_at,
created_at,
last_accessed_at,
metadata
FROM filtered
ORDER BY
COALESCE(date_trunc('milliseconds', %I), 'epoch'::timestamptz) %s,
name COLLATE "C" %s
LIMIT $4
$sql$,
p_sort_column,
v_cursor_op,
p_sort_column,
p_sort_order,
p_sort_order
);
RETURN QUERY EXECUTE v_query
USING v_prefix, p_bucket_id, p_level, p_limit, p_start_after, p_sort_column_after;
END;
$_$;
CREATE FUNCTION storage.search_v2(prefix text, bucket_name text, limits integer DEFAULT 100, levels integer DEFAULT 1, start_after text DEFAULT ''::text, sort_order text DEFAULT 'asc'::text, sort_column text DEFAULT 'name'::text, sort_column_after text DEFAULT ''::text) RETURNS TABLE(key text, name text, id uuid, updated_at timestamp with time zone, created_at timestamp with time zone, last_accessed_at timestamp with time zone, metadata jsonb)
    LANGUAGE plpgsql STABLE
    AS $$
DECLARE
    v_sort_col text;
    v_sort_ord text;
    v_limit int;
BEGIN
    -- Cap limit to maximum of 1500 records
    v_limit := LEAST(coalesce(limits, 100), 1500);
    -- Validate and normalize sort_order
    v_sort_ord := lower(coalesce(sort_order, 'asc'));
    IF v_sort_ord NOT IN ('asc', 'desc') THEN
        v_sort_ord := 'asc';
    END IF;
    -- Validate and normalize sort_column
    v_sort_col := lower(coalesce(sort_column, 'name'));
    IF v_sort_col NOT IN ('name', 'updated_at', 'created_at') THEN
        v_sort_col := 'name';
    END IF;
    -- Route to appropriate implementation
    IF v_sort_col = 'name' THEN
        -- Use list_objects_with_delimiter for name sorting (most efficient: O(k * log n))
        RETURN QUERY
        SELECT
            split_part(l.name, '/', levels) AS key,
            l.name AS name,
            l.id,
            l.updated_at,
            l.created_at,
            l.last_accessed_at,
            l.metadata
        FROM storage.list_objects_with_delimiter(
            bucket_name,
            coalesce(prefix, ''),
            '/',
            v_limit,
            start_after,
            '',
            v_sort_ord
        ) l;
    ELSE
        -- Use aggregation approach for timestamp sorting
        -- Not efficient for large datasets but supports correct pagination
        RETURN QUERY SELECT * FROM storage.search_by_timestamp(
            prefix, bucket_name, v_limit, levels, start_after,
            v_sort_ord, v_sort_col, sort_column_after
        );
    END IF;
END;
$$;
CREATE FUNCTION storage.update_updated_at_column() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
    NEW.updated_at = now();
    RETURN NEW;
END;
$$;