Batch Trigger upgrades #1502
Conversation
🦋 Changeset detected

Latest commit: 153af67. The changes in this PR will be included in the next version bump. This PR includes changesets to release 11 packages.
Walkthrough

The changes introduced in this pull request enhance the batch processing capabilities within the Trigger.dev framework. New methods such as `batch.trigger` and `batch.triggerByTask` are introduced in the SDK, along with the supporting services, presenters, and routes in the webapp.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant API
    participant BatchService
    participant TaskService
    User->>API: Initiates batch trigger request
    API->>BatchService: batch.trigger()
    BatchService->>TaskService: trigger tasks in batch
    TaskService-->>BatchService: return task results
    BatchService-->>API: return batch results
    API-->>User: return response
```
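In SDK terms, the flow above corresponds roughly to the sketch below (TypeScript). It is illustrative only: the task id and payloads are placeholders and type parameters are omitted; the authoritative call shapes are in the changeset examples reviewed later in this thread.

```ts
// Sketch of the batch trigger flow from the caller's side.
import { batch, runs } from "@trigger.dev/sdk/v3";

export async function triggerAndInspect() {
  // Trigger several runs in a single request.
  const response = await batch.trigger([
    { id: "my-task", payload: { userId: "1" } },
    { id: "my-task", payload: { userId: "2" } },
  ]);

  // Each run in the response carries its own id.
  console.log(response.batchId, response.runs.map((r) => r.id));

  // Retrieve the batch and list its runs later.
  const myBatch = await batch.retrieve(response.batchId);
  const batchRuns = await runs.list({ batch: response.batchId });
  console.log(myBatch.status, batchRuns.data.length);
}
```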
Possibly related PRs
Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 eslint

packages/trigger-sdk/src/v3/batch.ts: ESLint 8.45.0 couldn't find the config "custom" to extend from. Please check that the name of the config is correct. The config "custom" was referenced from the config file in "/.eslintrc.js".
@trigger.dev/build
@trigger.dev/core
@trigger.dev/react-hooks
@trigger.dev/rsc
@trigger.dev/sdk
trigger.dev
Actionable comments posted: 35
🧹 Outside diff range and nitpick comments (103)
internal-packages/database/prisma/migrations/20241126094217_add_batch_id_index_to_task_run/migration.sql (1)
1-2
: Well-structured index creation for batch operations!

The index is correctly implemented with `CONCURRENTLY` to avoid table locks during creation, and `IF NOT EXISTS` for idempotency. This composite index will improve query performance when filtering task runs by runtime environment and batch ID, which is essential for the new batch processing features.

Consider monitoring this index's usage in production to ensure it's being utilized as expected. You can use PostgreSQL's `pg_stat_user_indexes` view to track index usage statistics.
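As a concrete (hypothetical) way to do that monitoring from this codebase, a raw query against `pg_stat_user_indexes` could look like the sketch below; the index name filter is an assumption and should be matched to the name Prisma generated for this migration.

```ts
// Hypothetical monitoring helper, not part of this PR. Assumes a Prisma client
// and that the generated index name contains "batchId".
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function batchIndexUsage() {
  // idx_scan counts how many times the index has been used by the planner.
  return prisma.$queryRaw<Array<{ indexrelname: string; idx_scan: bigint }>>`
    SELECT indexrelname, idx_scan
    FROM pg_stat_user_indexes
    WHERE relname = 'TaskRun'
      AND indexrelname ILIKE '%batchId%'
  `;
}
```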
.changeset/wild-needles-hunt.md (1)

7-7
: Enhance the changelog entry for better clarity

The current entry could be more descriptive about the feature's capabilities and benefits.
Consider expanding it to:
-Added ability to subscribe to a batch of runs using runs.subscribeToBatch +Added the ability to subscribe to a batch of runs using runs.subscribeToBatch, enabling efficient monitoring of multiple runs simultaneously. This complements the new batch triggering capabilities (batch.trigger, batch.triggerByTask) that support up to 500 runs per request.🧰 Tools
🪛 LanguageTool
[uncategorized] ~7-~7: You might be missing the article “the” here.
Context: ...h "@trigger.dev/core": patch --- Added ability to subscribe to a batch of runs using r...(AI_EN_LECTOR_MISSING_DETERMINER_THE)
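For illustration, a minimal sketch of what the expanded entry could show, assuming `runs.subscribeToBatch` returns an async-iterable subscription like the other realtime helpers and that emitted runs carry `id`/`status` fields:

```ts
// Sketch only: watch every run in a batch until the subscription ends.
import { runs } from "@trigger.dev/sdk/v3";

export async function watchBatch(batchId: string) {
  for await (const run of runs.subscribeToBatch(batchId)) {
    console.log(`run ${run.id} is now ${run.status}`);
  }
}
```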
internal-packages/database/prisma/migrations/20241120174034_add_idempotency_key_expires_at_columns/migration.sql (1)
1-5
: Consider adding indexes for better query performance.

Since these columns will likely be used in WHERE clauses for finding expired keys and validating uniqueness within the validity period, consider adding indexes to optimize query performance.
Add the following indexes:
-- AlterTable ALTER TABLE "BatchTaskRun" ADD COLUMN "idempotencyKeyExpiresAt" TIMESTAMP(3); +-- CreateIndex +CREATE INDEX "idx_batch_task_run_idempotency_expires" ON "BatchTaskRun"("idempotencyKeyExpiresAt"); -- AlterTable ALTER TABLE "TaskRun" ADD COLUMN "idempotencyKeyExpiresAt" TIMESTAMP(3); +-- CreateIndex +CREATE INDEX "idx_task_run_idempotency_expires" ON "TaskRun"("idempotencyKeyExpiresAt");internal-packages/database/prisma/migrations/20241120161204_modify_batch_task_run_for_improvements/migration.sql (1)
15-16
: Verify index effectiveness for common queries

The new unique index on `(runtimeEnvironmentId, idempotencyKey)` looks good. This will enforce uniqueness and potentially improve query performance for lookups using these columns.

Consider adding a regular index on `taskIdentifier` if it's frequently used in queries, as the old composite index that included it has been removed.

apps/webapp/app/routes/api.v3.runs.$runId.ts (1)
26-29
: Consider enhancing retry-related headersWhile adding the
x-should-retry
header is good, consider also including:
Retry-After
header to specify the recommended wait timex-max-retries
header to indicate the maximum number of retry attemptsreturn json( { error: "Run not found" }, - { status: 404, headers: { "x-should-retry": "true" } } + { + status: 404, + headers: { + "x-should-retry": "true", + "retry-after": "5", + "x-max-retries": "3" + } + } );apps/webapp/app/routes/api.v1.batches.$batchId.ts (1)
6-9
: Consider strengthening the batchId validation.While the current schema ensures
batchId
is a string, consider adding more specific validation rules to catch invalid IDs earlier in the request lifecycle.const ParamsSchema = z.object({ - batchId: z.string(), + batchId: z.string().uuid("Invalid batch ID format"), });apps/webapp/app/utils/idempotencyKeys.server.ts (2)
1-9
: Enhance documentation with examples and error casesThe JSDoc documentation could be more helpful with:
- Examples of valid TTL formats
- Specification of the error type thrown
- Documentation of when undefined is returned
/** * Resolve the TTL for an idempotency key. * * The TTL format is a string like "5m", "1h", "7d" + * + * @example + * resolveIdempotencyKeyTTL("30s") // 30 seconds from now + * resolveIdempotencyKeyTTL("5m") // 5 minutes from now + * resolveIdempotencyKeyTTL("2h") // 2 hours from now + * resolveIdempotencyKeyTTL("7d") // 7 days from now * * @param ttl The TTL string * @returns The date when the key will expire - * @throws If the TTL string is invalid + * @returns undefined When ttl is null, undefined, or has invalid format */
10-13
: Consider explicit empty string handlingThe current implementation treats empty strings as falsy values, returning undefined. While this might be intentional, it could be more explicit to prevent confusion.
export function resolveIdempotencyKeyTTL(ttl: string | undefined | null): Date | undefined { - if (!ttl) { + if (ttl === undefined || ttl === null || ttl === "") { return undefined; }apps/webapp/app/presenters/v3/ApiRetrieveBatchPresenter.server.ts (2)
6-9
: Consider adding input validation for friendlyId.The method accepts a friendlyId string but doesn't validate its format. Consider adding validation to prevent unnecessary database queries.
public async call( friendlyId: string, env: AuthenticatedEnvironment ): Promise<RetrieveBatchResponse | undefined> { + if (!friendlyId?.trim()) { + throw new Error("Invalid friendlyId provided"); + }
22-30
: Consider using a type-safe mapper function for response transformation.The direct mapping of database fields to response type could be made more maintainable and type-safe using a dedicated mapper function.
+ const toBatchResponse = (batch: BatchTaskRun): RetrieveBatchResponse => ({ + id: batch.friendlyId, + status: batch.status, + idempotencyKey: batch.idempotencyKey ?? undefined, + createdAt: batch.createdAt, + updatedAt: batch.updatedAt, + runCount: batch.runCount, + }); + - return { - id: batch.friendlyId, - status: batch.status, - idempotencyKey: batch.idempotencyKey ?? undefined, - createdAt: batch.createdAt, - updatedAt: batch.updatedAt, - runCount: batch.runCount, - }; + return toBatchResponse(batch);apps/webapp/app/v3/queueSizeLimits.server.ts (2)
27-27
: Consider adding bounds checking for itemsToAdd.While the calculation is correct, consider adding validation to ensure
itemsToAdd
is non-negative to prevent potential queue size miscalculations.+ if (itemsToAdd < 0) { + throw new Error("itemsToAdd must be non-negative"); + } const projectedSize = queueSize + itemsToAdd;
Line range hint
1-43
: Consider adding queue size monitoring.Given that this code now handles batch operations, consider adding monitoring/metrics for:
- Queue size trends over time
- Frequency of approaching size limits
- Batch operation sizes
This would help in proactive capacity planning and limit adjustments.
packages/core/src/v3/apiClient/types.ts (1)
34-34
: Add JSDoc documentation for the batch property.The new
batch
property would benefit from JSDoc documentation explaining its purpose and usage, similar to how other properties in this codebase are documented.Consider adding documentation like this:
+ /** + * Filter runs by batch identifier. Used when querying runs that were triggered as part of a batch operation. + */ batch?: string;packages/trigger-sdk/src/v3/batch.ts (2)
17-23
: Well-designed batch API surface

The batch object provides a clean and intuitive API with well-named methods. The separation between immediate and wait operations, as well as ID-based and task-based triggers, offers good flexibility to consumers.

Consider documenting the following aspects for API consumers:
- Maximum batch size limitations
- Error handling behavior for partial batch failures
- Performance implications of using `triggerAndWait` vs `trigger` (see the sketch below)
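For reference, a hedged sketch of the two call styles from inside a task; the child task id, payload shape, and result fields are placeholders rather than the SDK's exact types:

```ts
// batch.trigger returns handles immediately; batch.triggerAndWait (only valid
// inside a task) resolves once every child run has finished.
import { batch, task } from "@trigger.dev/sdk/v3";

export const parentTask = task({
  id: "parent-task",
  run: async () => {
    // Fire-and-forget style: returns run handles without waiting.
    const handle = await batch.trigger([
      { id: "child-task", payload: { n: 1 } },
      { id: "child-task", payload: { n: 2 } },
    ]);

    // Wait style: blocks the parent run until the children complete.
    const results = await batch.triggerAndWait([{ id: "child-task", payload: { n: 3 } }]);

    return { batchId: handle.batchId, waitedFor: results.runs.length };
  },
});
```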
25-53
: Add input validation for batchId

The function includes good observability with tracing and attributes. However, consider adding input validation for the batchId parameter.
Consider adding this validation at the start of the function:
```diff
 function retrieveBatch(
   batchId: string,
   requestOptions?: ApiRequestOptions
 ): ApiPromise<RetrieveBatchResponse> {
+  if (!batchId || typeof batchId !== "string") {
+    throw new Error("batchId must be a non-empty string");
+  }
   const apiClient = apiClientManager.clientOrThrow();
```

.changeset/ten-pans-itch.md (2)
13-21
: Enhance TTL documentation

The code examples are helpful, but consider documenting:
- Valid TTL format patterns (beyond just "60s")
- Behavior when keys expire
- Default TTL value if not specified
Would you like me to help draft expanded documentation for the TTL feature?
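As a starting point, a hedged example of the key-plus-TTL combination; it assumes the trigger options accept `idempotencyKey` and `idempotencyKeyTTL` (format strings such as "60s", "5m", "1h", "7d"), as described in this changeset.

```ts
// Sketch: a second trigger with the same key inside the TTL window returns the
// cached run; once the TTL expires, a new run is created.
import { idempotencyKeys, task } from "@trigger.dev/sdk/v3";

export const sendEmail = task({
  id: "send-email",
  run: async (payload: { to: string }) => {
    /* send the email */
  },
});

export async function sendOnce(to: string) {
  const key = await idempotencyKeys.create(`email-${to}`);
  return sendEmail.trigger({ to }, { idempotencyKey: key, idempotencyKeyTTL: "60s" });
}
```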
23-25
: Fix grammatical issue in breaking changes sectionThe breaking change is well-documented with clear reasoning. However, there's a minor grammatical issue:
-We've removed the `idempotencyKey` option from `triggerAndWait` and `batchTriggerAndWait`, because it can lead to permanently frozen runs in deployed tasks. +We've removed the `idempotencyKey` option from `triggerAndWait` and `batchTriggerAndWait` because it can lead to permanently frozen runs in deployed tasks.Remove the comma before "because" as the clause is essential to the meaning.
🧰 Tools
🪛 LanguageTool
[formatting] ~25-~25: If the ‘because’ clause is essential to the meaning, do not use a comma before the clause.
Context: ...riggerAndWaitand
batchTriggerAndWait`, because it can lead to permanently frozen runs ...(COMMA_BEFORE_BECAUSE)
packages/react-hooks/src/hooks/useApiClient.ts (1)
45-45
: Update JSDoc to reflect the new return type.The function's return type has changed to
ApiClient | undefined
, but this isn't reflected in the JSDoc comment above. Please update the@returns
tag to document whenundefined
might be returned.- * @returns {ApiClient} An initialized API client instance + * @returns {ApiClient | undefined} An initialized API client instance, or undefined if enabled is false and no access token is availablepackages/react-hooks/src/hooks/useRun.ts (1)
47-53
: Consider simplifying the refresh interval logic.The current implementation has nested conditionals and repeats the default value. Consider a more concise approach.
- refreshInterval: (run) => { - if (!run) return options?.refreshInterval ?? 0; - - if (run.isCompleted) return 0; - - return options?.refreshInterval ?? 0; - }, + refreshInterval: (run) => + !run || !run.isCompleted ? (options?.refreshInterval ?? 0) : 0,.changeset/perfect-onions-call.md (4)
7-7
: Fix grammar in feature descriptionChange "methods that allows" to "methods that allow" for correct subject-verb agreement.
🧰 Tools
🪛 LanguageTool
[grammar] ~7-~7: Possible subject-verb agreement error detected.
Context: ...er and batch.triggerByTask methods that allows triggering multiple different tasks in ...(PLURAL_THAT_AGREEMENT)
14-25
: Consider documenting error types and adding input validation

The error handling is good, but it would be helpful to:
- Document possible error types in `run.error`
- Add input validation for the payload structure
- Add example error handling for common scenarios
Would you like me to provide an enhanced example with these improvements?
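As one possible shape for that example, a hedged sketch of per-run error handling after a wait-style batch call; it assumes each result exposes the usual `ok`/`output`/`error` fields and uses placeholder task ids:

```ts
import { batch, task } from "@trigger.dev/sdk/v3";

export const coordinator = task({
  id: "coordinator",
  run: async () => {
    const { runs } = await batch.triggerAndWait([
      { id: "child-task", payload: { n: 1 } },
      { id: "child-task", payload: { n: 2 } },
    ]);

    for (const run of runs) {
      if (run.ok) {
        console.log("output", run.output);
      } else {
        // Decide here whether a single failed child should fail the whole batch.
        console.error("run failed", run.error);
      }
    }
  },
});
```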
28-28
: Simplify phrasingChange "inside of a task" to "inside a task" for better readability.
🧰 Tools
🪛 LanguageTool
[style] ~28-~28: This phrase is redundant. Consider using “inside”.
Context: ...or(run.error); } } ``` Or if you are inside of a task, you can usetriggerByTask
: `...(OUTSIDE_OF)
62-78
: Add input type definitions to tasksThe example tasks should demonstrate proper input type definitions for better type safety.
export const myTask1 = task({ id: 'myTask1', + input: z.object({ + foo: z.string() + }), run: async () => { return { foo: 'bar' } } }); export const myTask2 = task({ id: 'myTask2', + input: z.object({ + baz: z.string() + }), run: async () => { return { baz: 'qux' } } });apps/webapp/app/components/primitives/Switch.tsx (1)
16-19
: LGTM! Consider consolidating transition properties.The styling changes for the small variant improve the component's visual feedback and accessibility, particularly for disabled states. The hover styles are properly organized and the removal of duplicated styles in the text class improves maintainability.
Consider consolidating the transition properties for better consistency:
- "flex items-center h-[1.5rem] gap-x-1.5 rounded hover:bg-tertiary disabled:hover:bg-transparent pr-1 py-[0.1rem] pl-1.5 transition focus-custom disabled:hover:text-charcoal-400 disabled:opacity-50 text-charcoal-400 hover:text-charcoal-200 disabled:hover:cursor-not-allowed hover:cursor-pointer", + "flex items-center h-[1.5rem] gap-x-1.5 rounded hover:bg-tertiary disabled:hover:bg-transparent pr-1 py-[0.1rem] pl-1.5 focus-custom disabled:hover:text-charcoal-400 disabled:opacity-50 text-charcoal-400 hover:text-charcoal-200 disabled:hover:cursor-not-allowed hover:cursor-pointer transition-all", root: "h-3 w-6", thumb: "h-2.5 w-2.5 data-[state=checked]:translate-x-2.5 data-[state=unchecked]:translate-x-0", - text: "text-xs transition", + text: "text-xs transition-all",This change:
- Uses
transition-all
consistently across both classes- Ensures all properties transition smoothly, including opacity and colors
apps/webapp/app/components/runs/v3/BatchStatus.tsx (4)
1-14
: LGTM! Consider adding JSDoc comments.The imports and type definitions are well-structured with good type safety using the
satisfies
operator. The descriptions are clear and concise.Consider adding JSDoc comments to document the purpose of
allBatchStatuses
and its relationship withBatchTaskRunStatus
:+/** + * All possible states for a batch task run. + * @type {readonly ["PENDING", "COMPLETED"]} + */ export const allBatchStatuses = ["PENDING", "COMPLETED"] as const satisfies Readonly< Array<BatchTaskRunStatus> >;
20-57
: Enhance type safety and accessibility.The components are well-structured, but there's room for improvement:
- Make className types more specific:
- className?: string; + className?: `flex items-center gap-${number}` | string;
- Make BatchStatusIcon's className optional:
- className: string; + className?: string;
- Add accessibility attributes:
export function BatchStatusIcon({ status, className, }: { status: BatchTaskRunStatus; className?: string; }) { switch (status) { case "PENDING": - return <Spinner className={cn(batchStatusColor(status), className)} />; + return <Spinner className={cn(batchStatusColor(status), className)} aria-label={`Status: ${batchStatusTitle(status)}`} />; case "COMPLETED": - return <CheckCircleIcon className={cn(batchStatusColor(status), className)} />; + return <CheckCircleIcon className={cn(batchStatusColor(status), className)} aria-label={`Status: ${batchStatusTitle(status)}`} />;
59-81
: Consider using constants for status colors and titles.The helper functions are well-implemented with proper type safety. Consider extracting the colors and titles into constants for better maintainability:
+const BATCH_STATUS_COLORS: Record<BatchTaskRunStatus, string> = { + PENDING: "text-pending", + COMPLETED: "text-success", +} as const; + +const BATCH_STATUS_TITLES: Record<BatchTaskRunStatus, string> = { + PENDING: "Pending", + COMPLETED: "Completed", +} as const; + export function batchStatusColor(status: BatchTaskRunStatus): string { - switch (status) { - case "PENDING": - return "text-pending"; - case "COMPLETED": - return "text-success"; - default: { - assertNever(status); - } - } + return BATCH_STATUS_COLORS[status]; } export function batchStatusTitle(status: BatchTaskRunStatus): string { - switch (status) { - case "PENDING": - return "Pending"; - case "COMPLETED": - return "Completed"; - default: { - assertNever(status); - } - } + return BATCH_STATUS_TITLES[status]; }This approach:
- Centralizes status-related values
- Makes it easier to update values
- Maintains type safety through Record type
1-81
: Consider future extensibility for batch statuses.

The current implementation handles PENDING and COMPLETED states well, but consider planning for potential future status additions:
- Document the process for adding new statuses
- Consider implementing a status transition system
- Think about grouping statuses (e.g., terminal states, in-progress states)
This will make it easier to extend the system when new requirements arise.
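A minimal sketch of the grouping idea, assuming `BatchTaskRunStatus` is exported from the internal database package; the grouping itself is a proposal, not something in this PR:

```ts
import type { BatchTaskRunStatus } from "@trigger.dev/database";

// Only COMPLETED is terminal today; new terminal states would be added here.
const TERMINAL_BATCH_STATUSES = ["COMPLETED"] as const satisfies Readonly<
  Array<BatchTaskRunStatus>
>;

export function isTerminalBatchStatus(status: BatchTaskRunStatus): boolean {
  return (TERMINAL_BATCH_STATUSES as readonly BatchTaskRunStatus[]).includes(status);
}
```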
packages/trigger-sdk/package.json (1)
Line range hint
41-65
: Review OpenTelemetry dependency versionsThe OpenTelemetry dependencies are pinned to specific versions:
- @opentelemetry/api: 1.9.0
- @opentelemetry/api-logs: 0.52.1
- @opentelemetry/semantic-conventions: 1.25.1
Consider setting up automated dependency updates for security patches while maintaining compatibility.
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (1)
45-45
: Consider enhancing debug loggingWhile the current logging is good, it would be more helpful to include the resolved TTL value after processing.
Consider moving the logging after line 73 and including the resolved value:
- logger.debug("Triggering task", { - taskId: params.taskId, - idempotencyKey, - idempotencyKeyTTL, - triggerVersion, - headers, - options: body.options, - isFromWorker, - traceContext, - }); + const idempotencyKeyExpiresAt = resolveIdempotencyKeyTTL(idempotencyKeyTTL); + + logger.debug("Triggering task", { + taskId: params.taskId, + idempotencyKey, + idempotencyKeyTTL, + idempotencyKeyExpiresAt, + triggerVersion, + headers, + options: body.options, + isFromWorker, + traceContext, + });Also applies to: 65-65
apps/webapp/app/v3/services/batchTriggerTask.server.ts (2)
Line range hint
91-125
: Consider adding timeout handling for batch operationsWhile the error handling for individual tasks is robust, the batch operation itself lacks timeout protection. For large batches, this could lead to long-running operations that tie up resources.
Consider adding:
- A timeout parameter to the service options
- A mechanism to abort remaining tasks if the batch execution exceeds the timeout
- Status tracking for partially completed batches
export type BatchTriggerTaskServiceOptions = { idempotencyKey?: string; triggerVersion?: string; traceContext?: Record<string, string | undefined>; spanParentAsLink?: boolean; + batchTimeout?: number; // Maximum time in ms for the entire batch operation };
Line range hint
126-152
: Optimize batch processing performanceThe current implementation processes tasks sequentially and doesn't enforce batch size limits, which could lead to performance issues with large batches.
Consider these improvements:
- Process tasks in parallel using Promise.all with a concurrency limit
- Add a maximum batch size limit
- Add batch progress tracking
Example implementation:
const BATCH_CONCURRENCY_LIMIT = 10; const MAX_BATCH_SIZE = 500; // Process in chunks with controlled concurrency for (let i = 0; i < body.items.length; i += BATCH_CONCURRENCY_LIMIT) { const chunk = body.items.slice(i, i + BATCH_CONCURRENCY_LIMIT); await Promise.all( chunk.map(item => processItem(item)) ); }packages/react-hooks/src/hooks/useTaskTrigger.ts (1)
69-71
: Enhance error message with troubleshooting guidanceWhile the error handling is good, the message could be more helpful by including troubleshooting steps.
- throw new Error("Could not trigger task in useTaskTrigger: Missing access token"); + throw new Error("Could not trigger task in useTaskTrigger: Missing access token. Ensure you have provided an access token through TriggerProvider or useApiClient options.");references/v3-catalog/src/management.ts (2)
265-271
: Add error handling and response validation.The batch retrieval and run listing operations should include error handling and response validation.
Consider this improved implementation:
- const $batch = await batch.retrieve(response.batchId); - console.log("batch", $batch); - - const $runs = await runs.list({ batch: response.batchId }); - console.log("batch runs", $runs.data); + try { + const $batch = await batch.retrieve(response.batchId); + if (!$batch) { + throw new Error(`Failed to retrieve batch ${response.batchId}`); + } + console.log("batch", $batch); + + const $runs = await runs.list({ batch: response.batchId }); + if (!$runs?.data) { + throw new Error(`Failed to list runs for batch ${response.batchId}`); + } + console.log("batch runs", $runs.data); + } catch (error) { + console.error("Error processing batch:", error); + throw error; + }
Line range hint
1-277
: Consider adding file-level documentation.Since this file serves as an example of SDK usage, it would be helpful to add a file-level comment explaining its purpose, the examples it contains, and how to use them effectively.
Add documentation at the top of the file:
+/** + * This file contains example implementations demonstrating various features of the Trigger.dev SDK. + * It includes examples for: + * - Environment variables management + * - Run management and monitoring + * - Schedule management + * - Batch processing + * + * To test different features, uncomment the relevant function call at the bottom of the file. + */ import { configure, envvars, runs, schedules, batch } from "@trigger.dev/sdk/v3";packages/core/src/v3/schemas/common.ts (3)
254-255
: LGTM! Consider documenting the taskIdentifier format.The addition of the optional
taskIdentifier
field is well-structured and maintains backward compatibility. Consider adding a JSDoc comment to document the expected format and purpose of the taskIdentifier.+ /** + * A unique identifier for the task within a batch operation. + * Optional for backward compatibility with non-batch operations. + */ taskIdentifier: z.string().optional(),
266-267
: LGTM! Maintain consistency with failed execution schema documentation.The addition matches the failed execution schema, maintaining symmetry between success and failure cases.
Apply the same documentation suggestion as the failed execution schema for consistency.
254-255
: Consider future schema evolution strategy.The current approach of marking fields as optional for backward compatibility is good for this transition phase. However, consider:
- Documenting when these fields will become required
- Adding a schema version field to facilitate future migrations
- Creating a migration guide for users
This will help manage the schema evolution as the batch processing features mature.
Also applies to: 266-267
apps/webapp/app/services/realtimeClient.server.ts (3)
55-62
: LGTM! Consider adding input validation.The array-based WHERE clause construction is a clean approach. However, consider adding input validation for the
batchId
parameter to prevent potential SQL injection.async streamBatch( url: URL | string, environment: RealtimeEnvironment, batchId: string, clientVersion?: string ) { + if (!batchId?.match(/^[a-zA-Z0-9-_]+$/)) { + throw new Error("Invalid batchId format"); + } const whereClauses: string[] = [ `"runtimeEnvironmentId"='${environment.id}'`, `"batchId"='${batchId}'`, ]; const whereClause = whereClauses.join(" AND "); return this.#streamRunsWhere(url, environment, whereClause, clientVersion); }
Line range hint
141-150
: Enhance error messages and extract magic numbers.The error handling is robust, but consider these improvements:
- Make error messages more descriptive for API consumers
- Extract the magic number
100_000
as a named constant+const DEFAULT_CONCURRENCY_LIMIT = 100_000; + const concurrencyLimit = await this.cachedLimitProvider.getCachedLimit( environment.organizationId, - 100_000 + DEFAULT_CONCURRENCY_LIMIT ); if (!concurrencyLimit) { logger.error("Failed to get concurrency limit", { organizationId: environment.organizationId, }); - return json({ error: "Failed to get concurrency limit" }, { status: 500 }); + return json({ + error: "Internal server error", + message: "Failed to retrieve concurrency limits. Please try again later.", + code: "CONCURRENCY_LIMIT_ERROR" + }, { status: 500 }); }
Line range hint
210-244
: Document the Redis Lua script implementation.The Redis Lua script for concurrency control is complex and critical for the system's operation. Consider adding detailed documentation explaining:
- The purpose and behavior of each step in the script
- The meaning of return values
- The rationale behind using a sorted set for tracking requests
#registerCommands() { this.redis.defineCommand("incrementAndCheckConcurrency", { numberOfKeys: 1, lua: /* lua */ ` + -- Purpose: This script implements a sliding window rate limiter using Redis sorted sets + -- Input: + -- KEYS[1]: The key for the sorted set storing concurrent requests + -- ARGV[1]: Current timestamp + -- ARGV[2]: Request ID + -- ARGV[3]: Expiry time in seconds + -- ARGV[4]: Cutoff time for old requests + -- ARGV[5]: Maximum concurrent requests limit + -- Returns: + -- 1: if the request is allowed + -- 0: if the concurrent limit is exceeded + local concurrencyKey = KEYS[1] local timestamp = tonumber(ARGV[1]) local requestId = ARGV[2]apps/webapp/app/components/primitives/DateField.tsx (2)
1-1
: Remove unused import.The
XMarkIcon
is imported but never used in the code.-import { BellAlertIcon, XMarkIcon } from "@heroicons/react/20/solid"; +import { BellAlertIcon } from "@heroicons/react/20/solid";
180-180
: Add a comment explaining the month adjustment.The fix for month handling is correct. JavaScript's
getUTCMonth()
returns 0-based months (0-11), whileCalendarDateTime
expects 1-based months (1-12).- date.getUTCMonth() + 1, + date.getUTCMonth() + 1, // Add 1 to convert from 0-based (JS) to 1-based (CalendarDateTime) monthspackages/cli-v3/src/entryPoints/dev-run-worker.ts (4)
226-226
: Consider enhancing the error message with more contextWhile adding
taskIdentifier
improves traceability, consider including additional context in the error message, such as the searched locations or available tasks, to help diagnose why the task wasn't found.error: { type: "INTERNAL_ERROR", code: TaskRunErrorCodes.COULD_NOT_FIND_TASK, + message: `Task ${execution.task.id} not found in worker manifest. Available tasks: ${workerManifest.tasks.map(t => t.id).join(', ')}`, },
252-252
: Consider including the import path in error detailsWhile the error handling is thorough with stack traces, including the attempted import path would make debugging easier, especially in batch processing scenarios.
error: { type: "INTERNAL_ERROR", code: TaskRunErrorCodes.COULD_NOT_IMPORT_TASK, message: err instanceof Error ? err.message : String(err), + context: { entryPoint: taskManifest.entryPoint }, stackTrace: err instanceof Error ? err.stack : undefined, },
331-331
: Enhance timeout error with duration detailsConsider including both the actual duration and the maximum allowed duration in the error context for better debugging of timeout issues.
error: { type: "INTERNAL_ERROR", code: TaskRunErrorCodes.MAX_DURATION_EXCEEDED, message: signal.reason instanceof Error ? signal.reason.message : String(signal.reason), + context: { + maxDuration: execution.run.maxDuration, + actualDuration: usageSample.cpuTime, + }, },
Line range hint
197-355
: Consider implementing structured logging for improved observability

The addition of `taskIdentifier` across all responses improves traceability for batch processing. To further enhance observability, consider implementing structured logging throughout the task execution lifecycle. This would provide better insights into task execution patterns, especially in batch scenarios.

Key areas for structured logging:
- Task execution start/end
- State transitions
- Resource usage metrics
- Batch processing statistics
This would complement the current error handling and make debugging batch processing issues more straightforward.
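Purely as an illustration of the suggestion (the event names, fields, and sink are placeholders, not an existing API in this repo):

```ts
// Structured lifecycle events keyed by taskIdentifier/runId/batchId so batch
// executions can be correlated after the fact.
type TaskLifecycleEvent = {
  event: "task.start" | "task.complete" | "task.error";
  taskIdentifier: string;
  runId: string;
  batchId?: string;
  attemptNumber: number;
  durationMs?: number;
};

function emitTaskEvent(e: TaskLifecycleEvent) {
  // Placeholder sink; in the worker this would feed the existing logger/OTel pipeline.
  console.log(JSON.stringify(e));
}

emitTaskEvent({
  event: "task.start",
  taskIdentifier: "child-task",
  runId: "run_123",
  batchId: "batch_456",
  attemptNumber: 1,
});
```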
apps/webapp/app/env.server.ts (1)
236-236
: LGTM! Consider adding an explanatory comment.The 1MB limit for batch task payloads (compared to 3MB for regular tasks) seems reasonable to prevent system overload during batch processing.
Consider adding a comment explaining why batch tasks have a smaller payload limit than regular tasks:
- BATCH_TASK_PAYLOAD_MAXIMUM_SIZE: z.coerce.number().int().default(1_000_000), // 1MB + // Batch tasks have a smaller payload limit (1MB vs 3MB for regular tasks) to prevent system overload during batch processing + BATCH_TASK_PAYLOAD_MAXIMUM_SIZE: z.coerce.number().int().default(1_000_000), // 1MBpackages/cli-v3/src/entryPoints/deploy-run-worker.ts (3)
364-364
: LGTM: Enhanced timeout trackingThe taskIdentifier addition helps identify which tasks are timing out, enabling better monitoring of task duration patterns.
Consider implementing task duration metrics collection to proactively identify tasks that consistently approach their timeout limits.
411-411
: LGTM: Enhanced configuration error trackingThe taskIdentifier addition helps track which task encountered configuration issues.
Consider categorizing configuration errors more specifically to help with troubleshooting. For example:
- code: TaskRunErrorCodes.CONFIGURED_INCORRECTLY, + code: err instanceof ConfigValidationError + ? TaskRunErrorCodes.CONFIG_VALIDATION_ERROR + : TaskRunErrorCodes.CONFIGURED_INCORRECTLY,
Line range hint
219-411
: Architectural Improvement: Enhanced Task TrackingThe consistent addition of taskIdentifier across all response types improves the system's observability and error tracking capabilities. This enhancement enables:
- Better correlation of errors with specific tasks
- Improved debugging of task execution issues
- More effective monitoring and metrics collection
Consider leveraging this enhanced task identification to implement:
- Task-specific error rate monitoring
- Performance tracking by task type
- Automated task health dashboards
packages/cli-v3/src/dev/backgroundWorker.ts (1)
501-501
: LGTM! Consider enhancing error context further.The addition of
taskIdentifier
to the error object improves error tracking in batch operations. This is particularly valuable for identifying which specific task failed when executing multiple tasks in a batch.Consider enriching the error context further by including additional task metadata that could be useful for debugging:
- taskIdentifier: payload.execution.task.id, + taskIdentifier: payload.execution.task.id, + taskContext: { + version: payload.execution.task.version, + runId: payload.execution.run.id, + attemptNumber: payload.execution.attempt.number, + },This would provide more context for debugging batch operation failures, especially when tasks are retried multiple times.
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.runs._index/route.tsx (1)
66-71
: Consider adding date validation for the time range parameters.The new parameters are well-structured and align with the batch processing feature. However, consider validating the
from
andto
date parameters to ensure they are in the correct format and represent a valid date range.- from: url.searchParams.get("from") ?? undefined, - to: url.searchParams.get("to") ?? undefined, + from: validateDate(url.searchParams.get("from")), + to: validateDate(url.searchParams.get("to")), // Add this helper function +function validateDate(date: string | null): string | undefined { + if (!date) return undefined; + const parsed = new Date(date); + return !isNaN(parsed.getTime()) ? date : undefined; +}packages/react-hooks/src/hooks/useRealtime.ts (1)
76-77
: Consider adding error logging for missing apiClient.While the validation checks are good, silently returning when apiClient is missing might make debugging harder. Consider adding debug logging or development-time warnings when apiClient is missing.
if (!runId || !apiClient) { + if (process.env.NODE_ENV === 'development' && !apiClient) { + console.warn('apiClient is not available for realtime updates'); + } return; }Also applies to: 210-211, 324-326, 414-416
apps/webapp/app/utils/pathBuilder.ts (1)
444-450
: Consider defining a BatchForPath type for consistency.The implementation looks good, but for better type safety and consistency with other path types in the file, consider defining a dedicated type.
+export type BatchForPath = Pick<Batch, "friendlyId">; export function v3BatchRunsPath( organization: OrgForPath, project: ProjectForPath, - batch: { friendlyId: string } + batch: BatchForPath ) { return `${v3ProjectPath(organization, project)}/runs?batchId=${batch.friendlyId}`; }apps/webapp/app/services/worker.server.ts (1)
732-740
: Consider enhancing the batch task handler implementationWhile the basic implementation is functional, consider the following improvements:
- Add error handling and logging for better observability
- Document the rationale for the maxAttempts value
- Add comments explaining the task's purpose and retry strategy
Consider applying these improvements:
"v3.processBatchTaskRun": { priority: 0, maxAttempts: 5, handler: async (payload, job) => { const service = new BatchTriggerV2Service(); + try { + logger.info("Processing batch task run", { + attempt: job.attempts, + maxAttempts: job.max_attempts, + payload + }); await service.processBatchTaskRun(payload); + } catch (error) { + logger.error("Failed to process batch task run", { + error, + attempt: job.attempts, + payload + }); + throw error; + } }, },internal-packages/database/prisma/schema.prisma (2)
1664-1666
: Consider adding database-level constraints for idempotencyKey expiration.The addition of
idempotencyKey
andidempotencyKeyExpiresAt
fields is good for managing duplicate task runs. However, consider adding a database-level check constraint to ensureidempotencyKeyExpiresAt
is always greater than the current timestamp whenidempotencyKey
is set.CHECK ( idempotencyKey IS NULL OR (idempotencyKeyExpiresAt IS NOT NULL AND idempotencyKeyExpiresAt > NOW()) )
2145-2163
: Consider adding validation for BatchTaskRun fields.The new batch processing fields look good, but consider adding these validations:
runCount
should match the length ofrunIds
arraypayloadType
should have a default of "application/json" to match TaskRunbatchVersion
should be constrained to known versionsmodel BatchTaskRun { // ... other fields ... runIds String[] @default([]) runCount Int @default(0) payload String? payloadType String @default("application/json") options Json? batchVersion String @default("v1") @@check(runCount = array_length(runIds, 1)) @@check(payloadType IN ('application/json')) @@check(batchVersion IN ('v1')) }apps/webapp/app/v3/r2.server.ts (2)
73-75
: Refactor common environment checks into a helper functionThe checks for
r2
andenv.OBJECT_STORE_BASE_URL
are duplicated across multiple functions. Consider refactoring these checks into a shared helper function to improve maintainability and reduce code duplication.Also applies to: 77-79, 116-118, 120-122
92-103
: SetdataType
based on response'sContent-Type
header and validate itCurrently, the code assumes the response content is text and of type
'application/json'
. To ensure correctness, consider validating the response'sContent-Type
header and settingdataType
accordingly.Suggested code:
const contentType = response.headers.get('Content-Type') || 'application/json'; + if (contentType !== 'application/json') { + throw new Error(`Unexpected content type: ${contentType}`); + } const data = await response.text(); const rawPacket = { data, - dataType: "application/json", + dataType: contentType, };apps/webapp/app/routes/api.v1.tasks.batch.ts (1)
135-142
: Potential JWT size issue due to including all run scopesIncluding a scope for each run in
batch.runs
may result in a large JWT if there are many runs, possibly exceeding header size limits or impacting performance. Consider grouping permissions or using a more efficient way to represent multiple runs.apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (2)
98-98
: Use 'bigint' instead of 'BigInt' for type annotationIn TypeScript, the primitive type
bigint
should be used instead ofBigInt
for type annotations to maintain consistency and prevent potential issues.Apply this diff to fix the type annotation:
- runCount: BigInt; + runCount: bigint;🧰 Tools
🪛 Biome (1.9.4)
[error] 98-98: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead(lint/complexity/noBannedTypes)
145-165
: Refactor pagination logic for clarity and correctnessThe pagination logic can be simplified and made more robust. Specifically, handling the
hasMore
condition and reversing the batches array can be improved to prevent potential off-by-one errors and enhance readability.Consider refactoring the pagination logic as follows:
- const hasMore = batches.length > pageSize; - //get cursors for next and previous pages - let next: string | undefined; - let previous: string | undefined; - switch (direction) { - case "forward": - previous = cursor ? batches.at(0)?.id : undefined; - if (hasMore) { - next = batches[pageSize - 1]?.id; - } - break; - case "backward": - batches.reverse(); - if (hasMore) { - previous = batches[1]?.id; - next = batches[pageSize]?.id; - } else { - next = batches[pageSize - 1]?.id; - } - break; - } + const hasMore = batches.length === pageSize + 1; + if (hasMore) { + batches.pop(); + } + let next: string | undefined; + let previous: string | undefined; + if (direction === "forward") { + next = hasMore ? batches[batches.length - 1]?.id : undefined; + previous = cursor; + } else { + batches.reverse(); + next = cursor; + previous = hasMore ? batches[0]?.id : undefined; + }apps/webapp/app/v3/services/resumeBatchRun.server.ts (5)
31-36
: Clarify the error message when batch run is not foundThe error message logged when the
batchRun
is not found may be misleading. Currently, it states that the batch run doesn't exist or doesn't have a dependent attempt, but the condition only checks for the existence ofbatchRun
. Since we're not checking for the dependent attempt here, consider updating the error message for accuracy.Apply this diff to update the error message:
-logger.error( - "ResumeBatchRunService: Batch run doesn't exist or doesn't have a dependent attempt", +logger.error( + "ResumeBatchRunService: Batch run not found", { batchRunId, } );
Line range hint
64-71
: EnsureruntimeEnvironment
is not null before accessing its propertiesWhile the
runtimeEnvironment
is included in thebatchRun
query, it's good practice to null-check it before accessing its properties to prevent potential runtime errors if the data is ever missing or becomes inconsistent.Apply this diff to add a null check:
-if (batchRun.runtimeEnvironment.type === "DEVELOPMENT" || !batchRun.dependentTaskAttemptId) { +if ( + batchRun.runtimeEnvironment?.type === "DEVELOPMENT" || + !batchRun.dependentTaskAttemptId +) {
Line range hint
109-136
: Review race condition handling and logging for clarityThe current logic handles scenarios where the
dependentTaskAttempt
is paused, and there is acheckpointEventId
. However, consider the following improvements:
- Race Condition Comment Clarity: The comment at line 155 about race conditions could be clarified for better understanding.
- Consolidate Logging Information: Some log messages include redundant information. Streamlining them can improve log readability.
- Ensure Correct Handling of Checkpoint Events: Double-check that the logic correctly handles cases with and without a
checkpointEventId
.Here's an example of improving the race condition comment:
-// In case of race conditions the status can be PAUSED without a checkpoint event -// When the checkpoint is created, it will continue the run +// Due to potential race conditions, the status might be PAUSED without a checkpoint event. +// Once the checkpoint is created, the run will continue automatically.Consider reviewing the entire logic block to ensure that all possible states are correctly handled and logged.
155-163
: Offer assistance in handling the paused attempts without checkpointsIf this scenario is expected and requires handling, I can help draft code to manage paused attempts without checkpoints to ensure the system remains robust.
Would you like me to generate code to handle paused
dependentTaskAttempt
s that lack acheckpointEventId
, or open a GitHub issue to track this?
77-93
: Optimize database query fordependentTaskAttempt
Currently, the query selects multiple fields from
dependentTaskAttempt
and its relatedtaskRun
. If not all fields are necessary, consider selecting only the fields that are used later in the code to minimize data retrieval.Apply this diff to select only necessary fields:
select: { status: true, id: true, taskRun: { select: { id: true, queue: true, taskIdentifier: true, concurrencyKey: true, + // Remove fields that are not used }, }, },
Review and adjust the selected fields as needed.
apps/webapp/app/v3/services/finalizeTaskRun.server.ts (1)
192-211
: Optimize batch item processing with concurrent updatesCurrently, batch items are processed sequentially within a
for-of
loop, which can be inefficient for larger batches. Consider processing the updates concurrently usingPromise.all
to improve performance.Apply this diff to process batch items concurrently:
- for (const item of batchItems) { - // Existing code inside the loop - if (environment.type !== "DEVELOPMENT" && item.batchTaskRun.dependentTaskAttemptId) { - continue; - } - - await this._prisma.batchTaskRunItem.update({ - where: { id: item.id }, - data: { status: "COMPLETED" }, - }); - - await ResumeBatchRunService.enqueue(item.batchTaskRunId, this._prisma); - } + await Promise.all(batchItems.map(async (item) => { + // Existing code inside the loop + if (environment.type !== "DEVELOPMENT" && item.batchTaskRun.dependentTaskAttemptId) { + return; + } + + await this._prisma.batchTaskRunItem.update({ + where: { id: item.id }, + data: { status: "COMPLETED" }, + }); + + await ResumeBatchRunService.enqueue(item.batchTaskRunId, this._prisma); + }));apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (8)
6-6
: Consider removing unused importformatDuration
The
formatDuration
function imported from@trigger.dev/core/v3/utils/durations
at line 6 does not appear to be used in the code. Unused imports can lead to unnecessary code bloat and potential confusion.Apply this diff to remove the unused import:
- import { formatDuration } from "@trigger.dev/core/v3/utils/durations";
95-97
: Remove redundant spread operator inpresenter.call
argumentsThe
...filters
spread includes theid
property, and thenfriendlyId
is set tofilters.id
. This might be redundant iffriendlyId
is not a separate parameter.If
friendlyId
is meant to be the same asfilters.id
, andpresenter.call
doesn't require both, consider removing one of them.const list = await presenter.call({ userId, projectId: project.id, ...filters, - friendlyId: filters.id, });
150-251
: Optimize table rendering for large datasetsRendering a large number of table rows can impact performance. Consider implementing virtualization for the table to improve rendering performance with large datasets.
Look into libraries like
react-window
orreact-virtualized
to virtualize the table rows.
157-176
: Accessibility improvement: Provide descriptive tooltipsThe tooltip provides a description for each batch status. Ensure that this information is accessible to screen readers and consider adding
aria-label
attributes where necessary.<TableHeaderCell tooltip={ <div className="flex flex-col divide-y divide-grid-dimmed" aria-label="Batch status descriptions"> {allBatchStatuses.map((status) => ( <div key={status} className="grid grid-cols-[8rem_1fr] gap-x-2 py-2 first:pt-1 last:pb-1" > <div className="mb-0.5 flex items-center gap-1.5 whitespace-nowrap"> <BatchStatusCombo status={status} /> </div> <Paragraph variant="extra-small" className="!text-wrap text-text-dimmed"> {descriptionForBatchStatus(status)} </Paragraph> </div> ))} </div> } > Status </TableHeaderCell>
115-120
: External link should open in a new tabThe "Batches docs" link navigates to documentation. For better user experience, consider opening external documentation links in a new tab.
<LinkButton variant={"docs/small"} LeadingIcon={BookOpenIcon} to={docsPath("/triggering")} + target="_blank" + rel="noopener noreferrer" > Batches docs </LinkButton>
223-226
: Avoid creating new Date objects multiple timesIn the
formatDuration
function call,new Date(batch.createdAt)
andnew Date(batch.finishedAt)
are called. This could be optimized by creating date objects once and reusing them.+ const createdAt = new Date(batch.createdAt); + const finishedAt = new Date(batch.finishedAt); return ( formatDuration(createdAt, finishedAt, { style: "short", }) )
75-83
: Simplify filters parsing with zodYou're manually parsing the search parameters and then using
BatchListFilters.parse(s)
, which is a zod schema. You can usezod
directly to parse the URLSearchParams, reducing boilerplate and potential errors.import { z } from "zod"; const BatchListFilters = z.object({ cursor: z.string().optional(), direction: z.string().optional(), environments: z.array(z.string()), statuses: z.array(z.string()), period: z.string().optional(), from: z.string().optional(), to: z.string().optional(), id: z.string().optional(), }); const filters = BatchListFilters.parse(Object.fromEntries(url.searchParams));
146-146
: Check for navigation location changesWhen determining if the page is loading, relying solely on
navigation.state
might not be sufficient. Consider checking if the location has changed to ensure accurate loading states.const navigation = useNavigation(); const location = useLocation(); + const [currentLocation, setCurrentLocation] = React.useState(location); React.useEffect(() => { setCurrentLocation(location); }, [location]); const isLoading = navigation.state !== "idle" && location !== currentLocation;
apps/webapp/app/presenters/v3/RunListPresenter.server.ts (4)
165-165
: Unnecessary optional chaining when assigningbatchId
Since
batch
is confirmed to exist due to the precedingif
statement, the optional chaining operator is unnecessary when accessingbatch.id
.Consider simplifying the assignment:
-batchId = batch?.id; +batchId = batch.id;
170-170
: Simplify condition using optional chainingYou can streamline the condition by using optional chaining, which makes the code more concise and readable:
-if (scheduleId && scheduleId.startsWith("sched_")) { +if (scheduleId?.startsWith("sched_")) {🧰 Tools
🪛 Biome (1.9.4)
[error] 170-170: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
181-181
: Unnecessary optional chaining when assigningscheduleId
As
schedule
is guaranteed to exist within theif
block, the optional chaining is unnecessary when accessingschedule.id
.Simplify the assignment:
-scheduleId = schedule?.id; +scheduleId = schedule.id;
218-218
: EnsurebatchId
is included in the returned run data if necessaryWhile
batchId
is selected in the query, it is not included in the returned run objects. IfbatchId
is relevant to the consumer of this data, consider adding it to the run mapping.packages/trigger-sdk/src/v3/runs.ts (3)
53-53
: Inconsistent Naming BetweensubscribeToBatch
andsubscribeToRunsInBatch
In the
runs
export object,subscribeToBatch
is assigned tosubscribeToRunsInBatch
. For consistency and to avoid confusion, consider using a uniform naming convention throughout the codebase. You might:
- Rename
subscribeToRunsInBatch
tosubscribeToBatch
to align with the export.- Alternatively, export it as
subscribeToRunsInBatch
to match the function name.This ensures clarity and consistency for users of the SDK.
444-450
: Evaluate Necessity of Generic ParameterTTasks
In
subscribeToRunsInBatch<TTasks extends AnyTask>
, the generic typeTTasks
doesn't appear to influence the function's internal logic. Consider:
- Removing the generic parameter if it's unnecessary.
- If type safety for the returned
RunSubscription
is desired, ensure that users can benefit from specifyingTTasks
.Simplifying the function signature can enhance readability and usability without sacrificing type safety.
338-366
: Review JSDoc Comments for Accuracy and ClarityThe JSDoc comments for
subscribeToRun
,subscribeToRunsWithTag
, andsubscribeToRunsInBatch
are comprehensive. To enhance their effectiveness:
- Double-check parameter descriptions and return types for accuracy.
- Ensure the examples reflect actual usage patterns and are free of typos.
- Maintain consistent formatting across all comment blocks.
This will improve developer experience and reduce potential confusion when utilizing these methods.
Also applies to: 377-406, 414-443
apps/webapp/app/v3/marqs/devQueueConsumer.server.ts (1)
Line range hint
493-508
: Handle cases whereversion
may not have two parts to prevent runtime errorsIn the
#getLatestBackgroundWorker()
method, the code assumes that theversion
string will split into at least two parts when usingsplit(".").map(Number)
. If aversion
does not contain a dot or has only one segment, accessingaccParts[1]
orcurrParts[1]
may result inundefined
, leading toNaN
in comparisons or runtime errors.Apply this diff to safely handle versions with fewer than two parts:
return workers.reduce((acc, curr) => { const accParts = acc.version.split(".").map(Number); const currParts = curr.version.split(".").map(Number); + // Ensure the minor version exists; default to 0 if undefined + accParts[1] = accParts[1] ?? 0; + currParts[1] = currParts[1] ?? 0; // Compare the major part if (accParts[0] < currParts[0]) { return curr; } else if (accParts[0] > currParts[0]) { return acc; } // Compare the minor part if (accParts[1] < currParts[1]) { return curr; } else { return acc; } });Alternatively, consider using a version parsing library or implement a more robust comparison function that can handle different version formats to ensure accuracy and prevent potential errors.
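If the comparison were pulled out into a helper, one hedged sketch (numeric dot-separated segments only, no pre-release handling) could be:

```ts
// Compare dot-separated numeric versions, treating missing or non-numeric
// segments as 0. Returns -1, 0, or 1.
function compareVersions(a: string, b: string): number {
  const as = a.split(".").map((p) => Number(p) || 0);
  const bs = b.split(".").map((p) => Number(p) || 0);
  for (let i = 0; i < Math.max(as.length, bs.length); i++) {
    const diff = (as[i] ?? 0) - (bs[i] ?? 0);
    if (diff !== 0) return Math.sign(diff);
  }
  return 0;
}

// Usage in #getLatestBackgroundWorker():
// workers.reduce((acc, curr) => (compareVersions(curr.version, acc.version) > 0 ? curr : acc));
```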
packages/core/src/v3/schemas/api.ts (3)
142-156
: Consider Including Run Status inBatchTriggerTaskV2Response
The
runs
array withinBatchTriggerTaskV2Response
provides details likeid
,taskIdentifier
,isCached
, andidempotencyKey
. Including thestatus
of each run could offer immediate insight into the execution state of each task, enhancing the ability to monitor batch processes effectively.
716-718
: ExpandBatchStatus
Enum for Comprehensive CoverageCurrently,
BatchStatus
includes only"PENDING"
and"COMPLETED"
. To fully represent possible batch states, consider adding statuses like"FAILED"
,"IN_PROGRESS"
, or"CANCELED"
. This expansion will improve error handling and provide clearer communication of batch processing stages.
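Assuming `BatchStatus` is a zod enum like the surrounding schemas, the expansion could look like the sketch below; the added values are proposals only:

```ts
import { z } from "zod";

export const BatchStatus = z.enum([
  "PENDING",
  "IN_PROGRESS", // proposed
  "COMPLETED",
  "FAILED", // proposed
  "CANCELED", // proposed
]);

export type BatchStatus = z.infer<typeof BatchStatus>;
```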
720-727
: AddcompletedAt
Timestamp toRetrieveBatchResponse
Including a
completedAt
field inRetrieveBatchResponse
would allow clients to determine when a batch process finished. This additional timestamp can be valuable for performance monitoring and auditing purposes.packages/core/src/v3/apiClient/index.ts (5)
70-72
: Consider renamingClientTriggerOptions
for clarity.The
ClientTriggerOptions
type currently only contains thespanParentAsLink
property, which may not fully represent its purpose. Consider renaming it to better reflect its usage or grouping it with related types for improved maintainability.
Line range hint
184-211
: Update JWT generation logic to handle optional tags correctly.In the
triggerTask
method, thescopes
array concatenatesArray.from(body.options?.tags)
. Ifbody.options?.tags
isundefined
, this could result in an error. Ensure thatbody.options?.tags
is handled safely when it'sundefined
.Apply this fix to handle the potential
undefined
value:- scopes: [`read:runs:${data.id}`].concat( - body.options?.tags ? Array.from(body.options?.tags).map((t) => `read:tags:${t}`) : [] - ), + scopes: [`read:runs:${data.id}`].concat( + body.options?.tags ? Array.from(body.options.tags).map((t) => `read:tags:${t}`) : [] + ),
258-261
: Handle potential duplicate run IDs in JWT scopes.When generating the JWT scopes,
data.runs.map((r) => \
read:runs:${r.id}`)could include duplicate run IDs if
data.runscontains duplicates. Consider using a
Set` to ensure that each run ID is unique in the scopes array.Apply this optimization:
scopes: [`read:batch:${data.id}`].concat( - data.runs.map((r) => `read:runs:${r.id}`) + Array.from(new Set(data.runs.map((r) => `read:runs:${r.id}`))) ),
686-700
: Simplify additional headers merging logic in#getHeaders
.The current implementation uses
reduce
to merge additional headers, which could be simplified for readability.Consider using
Object.assign
or the spread operator for clearer code.Apply this refactor:
headers: Record<string, string> = { // existing headers ...additionalHeaders, };
100-106
: Export statements can be consolidated for cleaner code.Multiple export statements for types can be combined into a single export to improve code cleanliness.
Apply this refactor:
-export { isRequestOptions }; -export type { - AnyRunShape, - ApiRequestOptions, - RealtimeRun, - AnyRealtimeRun, - RunShape, - RunStreamCallback, - RunSubscription, - TaskRunShape, -}; +export { + isRequestOptions, + AnyRunShape, + ApiRequestOptions, + RealtimeRun, + AnyRealtimeRun, + RunShape, + RunStreamCallback, + RunSubscription, + TaskRunShape, +};references/v3-catalog/src/trigger/batch.ts (1)
9-34
: Consider refactoring to eliminate duplicated code in 'items' arrayThe
items
array construction repeats the same object structure for thepayload
,foo
, andbar
properties. Refactoring to use a helper function to generate these objects can improve code readability and maintainability.Apply this diff to refactor the code:
+ const createItem = (i) => ({ + id: `item${i}`, + name: `Item Name ${i}`, + description: `This is a description for item ${i}`, + value: i, + timestamp: new Date().toISOString(), + }); const items = Array.from({ length: 10 }, (_, i) => ({ payload: { - id: `item${i}`, - name: `Item Name ${i}`, - description: `This is a description for item ${i}`, - value: i, - timestamp: new Date().toISOString(), - foo: { - id: `item${i}`, - name: `Item Name ${i}`, - description: `This is a description for item ${i}`, - value: i, - timestamp: new Date().toISOString(), - }, - bar: { - id: `item${i}`, - name: `Item Name ${i}`, - description: `This is a description for item ${i}`, - value: i, - timestamp: new Date().toISOString(), - }, + ...createItem(i), + foo: createItem(i), + bar: createItem(i), }, options: { idempotencyKey: `item${i}`, }, }));apps/webapp/app/v3/services/triggerTask.server.ts (1)
59-59
: Update or remove the TODO commentYou've implemented idempotency key expiration logic below. Consider updating or removing this TODO comment as it's no longer needed.
packages/core/src/v3/types/tasks.ts (1)
837-852
: Consider Specifying Explicit Types inBatchTasksRunHandle
In
BatchTasksRunHandle
, the last two type parameters ofBrandedRun
are set toany
. For stronger type safety, consider specifying the exact payload and output types instead ofany
.apps/webapp/app/v3/services/batchTriggerV2.server.ts (2)
54-308
: Refactorcall()
method for improved readability and maintainabilityThe
call()
method spans over 250 lines with nested conditions and complex logic. This can make the code difficult to read, understand, and maintain.Consider breaking the
call()
method into smaller, reusable helper methods that handle specific tasks. This will enhance code clarity and facilitate easier testing and debugging.
628-632
: Clarify comments for better understanding of range calculationsThe comments explaining the calculation of the new range may be unclear. For instance, the phrase "the new range will be 25-25" might be confusing.
Consider rephrasing the comments for clarity:
- // for example if the original range was 0-50 and the workingIndex is 25, the new range will be 25-25 + // For example, if the original range was 0-50 and the workingIndex is 25, the new range will start at 25 with a count of 25 (processing items from index 25 to 49)This provides a clearer explanation of how the new range is determined.
apps/webapp/app/components/runs/v3/BatchFilters.tsx (1)
768-780
: Remove unnecessary React fragmentThe fragment (
<>...</>
) is unnecessary here because it wraps a single child element. Removing it can improve code readability.Apply this diff to remove the redundant fragment:
- <> {rangeType === "range" ? ( <span> <DateTime date={fromDate!} includeTime includeSeconds showTimezone /> –{" "} <DateTime date={toDate!} includeTime includeSeconds showTimezone /> </span> ) : rangeType === "from" ? ( <DateTime date={fromDate!} includeTime includeSeconds showTimezone /> ) : ( <DateTime date={toDate!} includeTime includeSeconds showTimezone /> )} - </>🧰 Tools
🪛 Biome (1.9.4)
[error] 768-780: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment(lint/complexity/noUselessFragments)
packages/trigger-sdk/src/v3/shared.ts (1)
523-607
: IntroducingbatchTriggerById
function for batch processingThe
batchTriggerById
function allows triggering multiple tasks by their identifiers, streamlining batch task execution.Consider refactoring to reduce code duplication across batch trigger functions
The functions
batchTriggerById
,batchTriggerByIdAndWait
,batchTriggerTasks
, andbatchTriggerAndWaitTasks
contain similar logic for processing items and handling API responses. Refactoring common code into helper functions would improve maintainability and reduce redundancy.apps/webapp/app/v3/marqs/sharedQueueConsumer.server.ts (1)
1084-1088
: Ensure safe access tobatchItems
when retrieving batch informationTo prevent potential runtime errors, confirm that
taskRun.batchItems
is defined before accessing its elements. While your current check handles empty arrays, adding an explicit null or undefined check fortaskRun.batchItems
enhances robustness.Apply this diff to include the null check:
batch: - taskRun.batchItems[0] && taskRun.batchItems[0].batchTaskRun + taskRun.batchItems && taskRun.batchItems[0] && taskRun.batchItems[0].batchTaskRun ? { id: taskRun.batchItems[0].batchTaskRun.friendlyId } : undefined,apps/webapp/app/components/runs/v3/RunFilters.tsx (2)
1106-1118
: Remove unnecessary React FragmentIn the
AppliedCustomDateRangeFilter
function, the Fragment wrapping the returned component is unnecessary since it contains only one child. Removing it simplifies the code.Apply this diff to remove the Fragment:
- return ( - <> - <YourComponent /> - </> - ); + return ( + <YourComponent /> + );🧰 Tools
🪛 Biome (1.9.4)
[error] 1106-1118: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment(lint/complexity/noUselessFragments)
1157-1243
: Refactor duplicated ID dropdown components into a reusable componentThe functions
RunIdDropdown
,BatchIdDropdown
, andScheduleIdDropdown
share similar structure and logic. Refactoring them into a single reusable component can reduce code duplication and enhance maintainability.Here's an example of how you might refactor:
function IdDropdown({ idType, idPrefix, idLength, label, placeholder, trigger, clearSearchValue, searchValue, onClose, }: { idType: string; idPrefix: string; idLength: number; label: string; placeholder: string; trigger: ReactNode; clearSearchValue: () => void; searchValue: string; onClose?: () => void; }) { // Shared logic for ID validation and rendering }Then, you can replace the individual dropdowns with this generic component:
<IdDropdown idType="Run" idPrefix="run_" idLength={25} label="Run ID" placeholder="run_" trigger={/* trigger component */} clearSearchValue={clearSearchValue} searchValue={searchValue} onClose={onClose} />Repeat similarly for
BatchId
andScheduleId
with the appropriate parameters.Also applies to: 1275-1361, 1393-1479
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (73)
- .changeset/perfect-onions-call.md (1 hunks)
- .changeset/ten-pans-itch.md (1 hunks)
- .changeset/wild-needles-hunt.md (1 hunks)
- .github/copilot-instructions.md (1 hunks)
- apps/webapp/app/components/navigation/SideMenu.tsx (3 hunks)
- apps/webapp/app/components/primitives/DateField.tsx (4 hunks)
- apps/webapp/app/components/primitives/Select.tsx (2 hunks)
- apps/webapp/app/components/primitives/Switch.tsx (1 hunks)
- apps/webapp/app/components/runs/v3/BatchFilters.tsx (1 hunks)
- apps/webapp/app/components/runs/v3/BatchStatus.tsx (1 hunks)
- apps/webapp/app/components/runs/v3/RunFilters.tsx (13 hunks)
- apps/webapp/app/env.server.ts (1 hunks)
- apps/webapp/app/models/taskRun.server.ts (5 hunks)
- apps/webapp/app/presenters/v3/ApiRetrieveBatchPresenter.server.ts (1 hunks)
- apps/webapp/app/presenters/v3/ApiRunListPresenter.server.ts (2 hunks)
- apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (1 hunks)
- apps/webapp/app/presenters/v3/RunListPresenter.server.ts (9 hunks)
- apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (1 hunks)
- apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.runs._index/route.tsx (3 hunks)
- apps/webapp/app/routes/api.v1.batches.$batchId.ts (1 hunks)
- apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (4 hunks)
- apps/webapp/app/routes/api.v1.tasks.batch.ts (1 hunks)
- apps/webapp/app/routes/api.v3.runs.$runId.ts (1 hunks)
- apps/webapp/app/services/realtimeClient.server.ts (1 hunks)
- apps/webapp/app/services/routeBuilders/apiBuilder.server.ts (2 hunks)
- apps/webapp/app/services/worker.server.ts (3 hunks)
- apps/webapp/app/utils/idempotencyKeys.server.ts (1 hunks)
- apps/webapp/app/utils/pathBuilder.ts (1 hunks)
- apps/webapp/app/v3/marqs/devQueueConsumer.server.ts (1 hunks)
- apps/webapp/app/v3/marqs/sharedQueueConsumer.server.ts (3 hunks)
- apps/webapp/app/v3/queueSizeLimits.server.ts (2 hunks)
- apps/webapp/app/v3/r2.server.ts (3 hunks)
- apps/webapp/app/v3/services/batchTriggerTask.server.ts (1 hunks)
- apps/webapp/app/v3/services/batchTriggerV2.server.ts (1 hunks)
- apps/webapp/app/v3/services/createBackgroundWorker.server.ts (0 hunks)
- apps/webapp/app/v3/services/createTaskRunAttempt.server.ts (1 hunks)
- apps/webapp/app/v3/services/finalizeTaskRun.server.ts (3 hunks)
- apps/webapp/app/v3/services/resumeBatchRun.server.ts (5 hunks)
- apps/webapp/app/v3/services/triggerTask.server.ts (8 hunks)
- internal-packages/database/prisma/migrations/20241120161204_modify_batch_task_run_for_improvements/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241120174034_add_idempotency_key_expires_at_columns/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241121114454_add_payload_columns_to_batch_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241121135006_add_options_to_batch_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241125174424_task_run_project_id_id_desc_index_for_runs_list/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241126094217_add_batch_id_index_to_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241126110327_add_denormalized_task_version_to_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241126110902_add_more_worker_columns_to_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241127153804_add_batch_version_to_batch_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/schema.prisma (5 hunks)
- packages/cli-v3/src/dev/backgroundWorker.ts (1 hunks)
- packages/cli-v3/src/entryPoints/deploy-run-worker.ts (7 hunks)
- packages/cli-v3/src/entryPoints/dev-run-worker.ts (6 hunks)
- packages/core/src/v3/apiClient/index.ts (12 hunks)
- packages/core/src/v3/apiClient/runStream.ts (1 hunks)
- packages/core/src/v3/apiClient/types.ts (1 hunks)
- packages/core/src/v3/schemas/api.ts (3 hunks)
- packages/core/src/v3/schemas/common.ts (2 hunks)
- packages/core/src/v3/schemas/schemas.ts (0 hunks)
- packages/core/src/v3/types/tasks.ts (13 hunks)
- packages/react-hooks/src/hooks/useApiClient.ts (2 hunks)
- packages/react-hooks/src/hooks/useRealtime.ts (4 hunks)
- packages/react-hooks/src/hooks/useRun.ts (1 hunks)
- packages/react-hooks/src/hooks/useTaskTrigger.ts (2 hunks)
- packages/trigger-sdk/package.json (1 hunks)
- packages/trigger-sdk/src/v3/batch.ts (1 hunks)
- packages/trigger-sdk/src/v3/index.ts (2 hunks)
- packages/trigger-sdk/src/v3/runs.ts (6 hunks)
- packages/trigger-sdk/src/v3/shared.ts (30 hunks)
- packages/trigger-sdk/src/v3/tasks.ts (2 hunks)
- references/nextjs-realtime/src/app/runs/[id]/ClientRunDetails.tsx (2 hunks)
- references/v3-catalog/src/management.ts (2 hunks)
- references/v3-catalog/src/trigger/batch.ts (2 hunks)
- references/v3-catalog/src/trigger/subtasks.ts (0 hunks)
💤 Files with no reviewable changes (3)
- apps/webapp/app/v3/services/createBackgroundWorker.server.ts
- packages/core/src/v3/schemas/schemas.ts
- references/v3-catalog/src/trigger/subtasks.ts
✅ Files skipped from review due to trivial changes (6)
- .github/copilot-instructions.md
- internal-packages/database/prisma/migrations/20241121114454_add_payload_columns_to_batch_task_run/migration.sql
- internal-packages/database/prisma/migrations/20241125174424_task_run_project_id_id_desc_index_for_runs_list/migration.sql
- internal-packages/database/prisma/migrations/20241126110327_add_denormalized_task_version_to_task_run/migration.sql
- internal-packages/database/prisma/migrations/20241126110902_add_more_worker_columns_to_task_run/migration.sql
- internal-packages/database/prisma/migrations/20241127153804_add_batch_version_to_batch_task_run/migration.sql
🧰 Additional context used
🪛 LanguageTool
.changeset/perfect-onions-call.md
[grammar] ~7-~7: Possible subject-verb agreement error detected.
Context: ...er and batch.triggerByTask methods that allows triggering multiple different tasks in ...
(PLURAL_THAT_AGREEMENT)
[style] ~28-~28: This phrase is redundant. Consider using “inside”.
Context: ...or(run.error); } } ``` Or if you are inside of a task, you can use triggerByTask
: `...
(OUTSIDE_OF)
.changeset/ten-pans-itch.md
[formatting] ~25-~25: If the ‘because’ clause is essential to the meaning, do not use a comma before the clause.
Context: ...riggerAndWaitand
batchTriggerAndWait`, because it can lead to permanently frozen runs ...
(COMMA_BEFORE_BECAUSE)
.changeset/wild-needles-hunt.md
[uncategorized] ~7-~7: You might be missing the article “the” here.
Context: ...h "@trigger.dev/core": patch --- Added ability to subscribe to a batch of runs using r...
(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🪛 Biome (1.9.4)
apps/webapp/app/components/runs/v3/BatchFilters.tsx
[error] 768-780: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 916-916: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
apps/webapp/app/components/runs/v3/RunFilters.tsx
[error] 1106-1118: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 1516-1516: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
[error] 98-98: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
apps/webapp/app/presenters/v3/RunListPresenter.server.ts
[error] 170-170: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🪛 Gitleaks (8.21.2)
references/v3-catalog/src/trigger/batch.ts
491-491: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
492-492: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
515-515: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
516-516: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🔇 Additional comments (154)
internal-packages/database/prisma/migrations/20241121135006_add_options_to_batch_task_run/migration.sql (2)
1-2
: LGTM! The migration is safe and appropriate.
Adding a nullable JSONB column is a safe operation that won't affect existing records.
1-2
: Consider adding schema validation for the options field.
Since this JSONB column will store batch task configuration, it might be beneficial to:
- Document the expected structure of the options object
- Implement validation at the application level to ensure data consistency (see the sketch below)
- Consider using PostgreSQL check constraints if there are invariants that must be maintained
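For the application-level validation point, a minimal zod sketch could look like this (the option fields are assumed for illustration; the real shape may differ):

```ts
import { z } from "zod";

// Hypothetical shape for BatchTaskRun.options — adjust to the real structure.
const BatchTaskRunOptionsSchema = z.object({
  parentRunId: z.string().optional(),
  resumeParentOnCompletion: z.boolean().optional(),
  idempotencyKey: z.string().optional(),
});

// Validate before writing the JSONB column, e.g. in the batch trigger service.
export function parseBatchOptions(raw: unknown) {
  return BatchTaskRunOptionsSchema.parse(raw);
}
```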
Let's check if there are any TypeScript types or validation schemas defined for this column:
.changeset/wild-needles-hunt.md (1)
1-5
: LGTM on version bumps!
The patch version increments are appropriate for these backward-compatible feature additions across the interdependent packages.
internal-packages/database/prisma/migrations/20241120174034_add_idempotency_key_expires_at_columns/migration.sql (1)
1-5
: LGTM! The column additions look good.
The migration safely adds nullable TIMESTAMP(3) columns with millisecond precision to both tables, which is appropriate for storing expiration timestamps.
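As a rough illustration of how such a column might be consumed (assumed names, not the actual service code):

```ts
// Reuse an existing run only while its idempotency key has not expired.
// A null expiry is treated as "never expires" for pre-existing rows.
export function isIdempotencyKeyStillValid(
  run: { idempotencyKeyExpiresAt: Date | null },
  now: Date = new Date()
): boolean {
  return run.idempotencyKeyExpiresAt === null || run.idempotencyKeyExpiresAt > now;
}
```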
internal-packages/database/prisma/migrations/20241120161204_modify_batch_task_run_for_improvements/migration.sql (3)
7-8
: Verify index usage before removal
The removal of the composite index on (runtimeEnvironmentId, taskIdentifier, idempotencyKey)
might impact existing queries.
Run this script to check for potential query impacts:
✅ Verification successful
Based on the analysis of the code, I can now generate the final response:
Index removal is safe - new unique constraint exists
The index being dropped (BatchTaskRun_runtimeEnvironmentId_taskIdentifier_idempotenc_key
) was replaced by a simpler unique constraint @@unique([runtimeEnvironmentId, idempotencyKey])
in the schema, which is sufficient for the application's needs. The code in triggerTask.server.ts
confirms that queries are using this new constraint through the prisma client.
- The existing query in
triggerTask.server.ts
usesruntimeEnvironmentId_taskIdentifier_idempotencyKey
which is properly handled by the new unique constraint - The schema shows the new constraint is already in place:
@@unique([runtimeEnvironmentId, idempotencyKey])
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for queries that might be using the old index
# Expected result: Identify any code that might need optimization
# Search for any usage of these columns in combination
rg -l 'taskIdentifier.*idempotencyKey|idempotencyKey.*taskIdentifier'
Length of output: 284
Script:
#!/bin/bash
# Let's check the actual usage in these files to understand the context better
# Check schema definition
echo "=== Schema Definition ==="
cat internal-packages/database/prisma/schema.prisma | grep -A 10 -B 10 'BatchTaskRun'
# Check the migration that initially added this constraint
echo -e "\n=== Previous Migration ==="
cat internal-packages/database/prisma/migrations/20240704170301_scope_idempotency_key_to_task_identifier/migration.sql
# Check the actual usage in the service
echo -e "\n=== Service Usage ==="
cat apps/webapp/app/v3/services/triggerTask.server.ts
Length of output: 33497
1-6
:
Verify data integrity before applying migration
The warning indicates this migration will fail if duplicate values exist in the runtimeEnvironmentId
and idempotencyKey
columns.
Run this script to check for potential conflicts:
✅ Verification successful
Migration changes are safe and properly sequenced
The migration is safe to proceed because:
- It's dropping a more restrictive unique constraint (runtimeEnvironmentId + taskIdentifier + idempotencyKey) and replacing it with a less restrictive one (runtimeEnvironmentId + idempotencyKey)
- The sequence is correct: first dropping the old index, then altering the table, and finally creating the new index
- Making taskIdentifier nullable and removing it from the unique constraint is intentional, as evidenced by the migration history
The warning is standard for unique constraint creation but doesn't pose a risk in this case since we're actually relaxing the constraint by removing taskIdentifier from it.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for duplicate values that would violate the new unique constraint
# Expected result: No output means no conflicts
cat << 'EOF' > verify_duplicates.sql
SELECT "runtimeEnvironmentId", "idempotencyKey", COUNT(*)
FROM "BatchTaskRun"
GROUP BY "runtimeEnvironmentId", "idempotencyKey"
HAVING COUNT(*) > 1;
EOF
# Search for any SQL files that might contain the query
fd -e sql -x grep -l "BatchTaskRun" {} \;
Length of output: 1911
Script:
#!/bin/bash
# Let's analyze the migration history to understand the evolution of the BatchTaskRun table
# and its constraints
# Check the previous migration that might have introduced or modified these columns
cat ./internal-packages/database/prisma/migrations/20240704170301_scope_idempotency_key_to_task_identifier/migration.sql
# Also check the current migration in full to understand all changes
cat ./internal-packages/database/prisma/migrations/20241120161204_modify_batch_task_run_for_improvements/migration.sql
# Let's also check if there are any related model definitions
fd -e prisma | xargs grep -l "BatchTaskRun"
Length of output: 1919
10-13: Consider data migration strategy for new columns

The changes look good, but there are a few considerations:

- runCount defaults to 0 - ensure this aligns with existing records' actual run counts
- runIds as TEXT[] suggests these are string identifiers - confirm this is the intended type
- Making taskIdentifier nullable is a significant change that might affect existing code
apps/webapp/app/routes/api.v1.batches.$batchId.ts (3)
1-5
: LGTM! Dependencies are well-chosen and appropriate.
The imports are clean and purposeful, using standard Remix utilities and established validation libraries.
10-20
: LGTM! Robust security configuration.
The route is well-secured with:
- JWT authentication
- Explicit authorization rules
- Appropriate super scopes
22-24
: Verify handling of large batch responses.
Since batches can contain up to 500 runs, ensure that the presenter implements appropriate pagination or response size limits to prevent performance issues with large batches.
apps/webapp/app/utils/idempotencyKeys.server.ts (1)
1-41
: Verify consistent usage across the codebase
Let's verify that this function is properly integrated with the batch triggering system and API routes.
✅ Verification successful
Function is properly integrated and consistently used
The verification shows the function is correctly integrated across the codebase:
- Used in task triggering service (
triggerTask.server.ts
) - Properly integrated in API routes (
api.v1.tasks.batch.ts
andapi.v1.tasks.$taskId.trigger.ts
) - Consistent usage pattern with database schema (
idempotencyKeyExpiresAt
) - Default 24-hour fallback is consistently applied when TTL is not provided
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usage of resolveIdempotencyKeyTTL
echo "Searching for function usage..."
rg "resolveIdempotencyKeyTTL" -A 3
# Search for idempotencyKeyTTL in API routes
echo "Checking API route integration..."
rg "idempotencyKeyTTL" --type ts "apps/webapp/app/routes/"
# Search for related database columns
echo "Verifying database schema..."
rg "idempotencyKeyExpiresAt" --type ts
Length of output: 5751
apps/webapp/app/presenters/v3/ApiRetrieveBatchPresenter.server.ts (1)
1-5
: LGTM! Clean imports and proper class structure.
The imports are specific and the class follows the presenter pattern correctly.
apps/webapp/app/v3/queueSizeLimits.server.ts (2)
13-14
: LGTM! Well-designed parameter addition.
The new itemsToAdd
parameter is thoughtfully implemented with:
- Type safety through explicit number typing
- Backward compatibility via default value
- Clear parameter naming that reflects its purpose
30-33
: Verify batch size limits across the application.
The updated limit check correctly uses the projected size. However, let's verify that this aligns with any batch size constants defined elsewhere in the application.
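For reference, the projected-size check amounts to something like this sketch (parameter names assumed):

```ts
// Reject a batch when adding its items would push the queue past its maximum size.
export function checkQueueSizeLimit(
  currentSize: number,
  maximumSize: number,
  itemsToAdd: number = 1
): { isWithinLimits: boolean; projectedSize: number } {
  const projectedSize = currentSize + itemsToAdd;
  return { isWithinLimits: projectedSize <= maximumSize, projectedSize };
}
```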
✅ Verification successful
Let me search for the specific queue size limits to verify alignment.
The queue size limit check is correctly implemented and consistent with the codebase
The updated limit check in queueSizeLimits.server.ts
correctly uses projectedSize <= maximumSize
and is properly integrated with:
- Environment-specific queue size limits (
MAXIMUM_DEV_QUEUE_SIZE
,MAXIMUM_DEPLOYED_QUEUE_SIZE
) - Organization-level queue size configurations (
maximumDevQueueSize
,maximumDeployedQueueSize
) - Batch processing services that rely on these limits (
batchTriggerV2.server.ts
,triggerTask.server.ts
)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for batch size related constants and limits
# Look for relevant constants and configurations
rg -i "batch.*size|maximum.*batch|batch.*limit" --type ts
Length of output: 7270
Script:
#!/bin/bash
# Search for queue size and related limits
rg -i "queue.*size|maximum.*size|queue.*limit" --type ts
Length of output: 16916
packages/trigger-sdk/src/v3/tasks.ts (2)
Line range hint 1-89
: Well-structured exports and documentation
The file maintains a clean separation between imports, type exports, and function exports. The JSDoc documentation for the task
function is comprehensive and includes helpful examples.
27-27
: Verify the usage of new types across the codebase
The addition of TriggerOptions
and TaskFromIdentifier
types aligns with the batch processing objectives. Let's verify their usage to ensure consistency.
Also applies to: 29-29, 44-44, 46-46
✅ Verification successful
Let me gather more context about the type definitions and their usage patterns.
Based on the results, I can now generate the final response. The search results show that TriggerOptions
and TaskFromIdentifier
are properly defined types that are used consistently across the codebase:
Types are correctly integrated and used across the codebase
The types are properly integrated into the SDK's type system:
TriggerOptions
is used in task triggering operations across multiple packagesTaskFromIdentifier
is used for type inference and task identification- Both types are exported from shared modules and used consistently
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of new types across the codebase
# Expected: Find usages of new types in batch-related functionality
# Check for TriggerOptions usage
echo "Checking TriggerOptions usage:"
ast-grep --pattern 'TriggerOptions'
# Check for TaskFromIdentifier usage
echo "Checking TaskFromIdentifier usage:"
ast-grep --pattern 'TaskFromIdentifier'
# Look for any remaining TaskRunOptions (old type) usage
echo "Checking for any remaining TaskRunOptions usage:"
rg "TaskRunOptions"
Length of output: 907
Script:
#!/bin/bash
# Check the definition and usage context of TriggerOptions
echo "Checking TriggerOptions definition and usage context:"
rg -A 5 "TriggerOptions" packages/trigger-sdk/src/v3/shared.ts
# Check the definition and usage context of TaskFromIdentifier
echo "Checking TaskFromIdentifier definition and usage context:"
rg -A 5 "TaskFromIdentifier" packages/trigger-sdk/src/v3/shared.ts
# Check for any references to task running or triggering in the codebase
echo "Checking task running/triggering references:"
rg "type.*(?i)task.*run|type.*(?i)trigger" -A 3
Length of output: 128930
packages/core/src/v3/apiClient/types.ts (1)
34-34
: Verify the batch property integration.
The addition of the batch
property looks good and aligns with the batch processing enhancements. Let's verify its usage across the codebase.
✅ Verification successful
The batch property is properly integrated across the codebase
The search results confirm that the batch
property is well-integrated:
- It's properly typed in both
packages/core/src/v3/apiClient/types.ts
andpackages/trigger-sdk/src/v3/auth.ts
- It's actively used in batch processing functionality:
- Used in run queries and filtering (
references/v3-catalog/src/management.ts
) - Integrated with authentication scopes (
packages/core/src/v3/apiClient/index.ts
) - Implemented in batch task runs and API endpoints (
apps/webapp/app/v3/services/batchTriggerV2.server.ts
) - Properly handled in API routes and realtime endpoints
- Used in run queries and filtering (
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the integration of the batch property in related components
# Check for batch-related query parameter handling in API routes
rg -l "batch.*ListRunsQueryParams"
# Look for batch filtering implementation in run queries
ast-grep --pattern 'batch = $params.batch'
# Check for batch property usage in API calls
rg -A 3 'ListRunsQueryParams.*batch'
Length of output: 120
Script:
#!/bin/bash
# Let's try a broader search to understand the batch property usage
# Search for batch property usage in type definitions
rg -A 3 'batch\?:'
# Search for batch parameter in API calls and implementations
rg -A 3 'batch:'
# Look for any batch-related query parameters
rg -A 3 'batch.*params'
# Search for batch in function parameters or method signatures
ast-grep --pattern 'function $_($$$, batch: $_)'
Length of output: 10706
packages/trigger-sdk/src/v3/index.ts (2)
44-44
: LGTM! Verify the AnyRealtimeRun type definition.
The AnyRealtimeRun type export is properly positioned within the runs-related type exports and maintains consistent ordering.
Let's verify the type definition:
✅ Verification successful
Type definition verified and properly used across the codebase
The AnyRealtimeRun
type is well-defined in packages/core/src/v3/apiClient/runStream.ts
as export type AnyRealtimeRun = RealtimeRun<AnyTask>
and is consistently used across the codebase. It's properly exported through the SDK's public API in packages/trigger-sdk/src/v3/index.ts
and is being used correctly in client code, such as in the batch processing implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the AnyRealtimeRun type definition
# Expected: Find type definition and its relationship with RealtimeRun
ast-grep --pattern 'type AnyRealtimeRun' ./packages/trigger-sdk/src/v3/runs.js
Length of output: 165
Script:
#!/bin/bash
# Let's search for the type definition in TypeScript files
fd -e ts -e tsx | xargs ast-grep --pattern 'type AnyRealtimeRun'
# Also search for any references to this type
rg "AnyRealtimeRun" -A 2 -B 2
Length of output: 2713
6-6
: LGTM! Verify the batch module implementation.
The export statement for the batch module follows the established pattern and is positioned logically among other feature exports.
Let's verify the batch module implementation:
✅ Verification successful
The batch module is properly implemented and exported
The batch.ts module is well-structured and exports a comprehensive batch API object that:
- Re-exports core batch operations from shared.js (
trigger
,triggerAndWait
,triggerByTask
,triggerByTaskAndWait
) - Includes a
retrieve
function for fetching batch status - Follows consistent patterns with proper TypeScript types and error handling
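A minimal usage sketch of the retrieve helper (the shape of the returned object is not asserted here):

```ts
import { batch } from "@trigger.dev/sdk/v3";

// Fetch the current status of a previously triggered batch and log it.
export async function checkBatch(batchId: string) {
  const current = await batch.retrieve(batchId);
  console.log("batch", current);
  return current;
}
```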
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the batch module exists and exports expected methods
# Expected: Find implementations of batch.trigger, batch.triggerByTask, and batch.triggerByTaskAndWait
# Check if batch.js exists and contains the expected methods
ast-grep --pattern 'export function trigger' ./packages/trigger-sdk/src/v3/batch.js
ast-grep --pattern 'export function triggerByTask' ./packages/trigger-sdk/src/v3/batch.js
ast-grep --pattern 'export function triggerByTaskAndWait' ./packages/trigger-sdk/src/v3/batch.js
Length of output: 534
Script:
#!/bin/bash
# Let's first check if the file exists with .ts extension instead of .js
fd "batch.ts" packages/trigger-sdk/src/v3
# If file exists, let's check its content
rg "export function" -A 2 packages/trigger-sdk/src/v3/batch.ts
# Let's also check if there are any batch-related functions in the codebase
rg "export (async )?function.*batch" packages/trigger-sdk/src/v3
Length of output: 855
Script:
#!/bin/bash
# Let's check the content of batch.ts to understand what it exports
cat packages/trigger-sdk/src/v3/batch.ts
# Also check if it re-exports functions from shared.ts
rg "export.*from.*shared" packages/trigger-sdk/src/v3/batch.ts
Length of output: 1291
packages/trigger-sdk/src/v3/batch.ts (2)
1-15
: LGTM! Well-organized imports
The imports are cleanly organized and all are utilized within the module.
10-14
: Verify consistent error handling in batch operations
Let's verify that the imported batch trigger functions maintain consistent error handling patterns.
✅ Verification successful
Based on the examination of the code, I can now provide a final response:
Batch operations have robust error handling in place
The batch operations in the codebase demonstrate consistent error handling patterns:
- All batch operations properly handle errors through try-catch blocks
- Errors are propagated with proper typing and context
- Error handling includes:
- Payload validation errors
- API request errors
- Task execution errors
- Object store errors
- Rate limiting errors
The imported batch functions from shared.js
are well-implemented with proper error handling throughout the execution chain, including:
- Error handling during batch triggering
- Error handling during batch execution
- Error handling during result processing
- Proper propagation of errors to the caller
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check error handling patterns in batch trigger implementations
# Look for error handling patterns in the implementation
ast-grep --pattern 'try {
$$$
} catch (error) {
$$$
}'
# Look for throw statements to understand error cases
rg "throw new" -A 2
Length of output: 102803
Script:
#!/bin/bash
# Let's check the implementation of batch trigger functions in shared.js
# First, find the shared.js file
fd shared.js
# Then examine the batch trigger implementations
ast-grep --pattern 'export async function batchTrigger$_($$$) {
$$$
}'
# Also check for error handling in batch operations
rg "catch.*error.*batch" -A 3
Length of output: 121
Script:
#!/bin/bash
# Let's try a different approach to find batch operations and their error handling
# Look for batch trigger functions
rg "export.*function.*batch" -A 5
# Look for batch-related error handling patterns
rg "catch.*\{" --type ts -A 5 packages/trigger-sdk/src/v3/
# Look for batch-related files
fd -e ts -e js batch
Length of output: 9716
Script:
#!/bin/bash
# Let's examine the batch.ts file and its error handling implementation
cat packages/trigger-sdk/src/v3/batch.ts
# Also check the shared.ts file for batch operations implementation
cat packages/trigger-sdk/src/v3/shared.ts
Length of output: 36959
.changeset/ten-pans-itch.md (2)
1-5
: LGTM! Version bumps are appropriate
Minor version bumps align with semver for new feature additions while maintaining backward compatibility.
9-11
: Consider enhancing the documentation
While the feature announcement is clear, consider adding:
- Timeline for the upcoming SDK support for multiple tasks
- Guidelines for handling rate limits and errors when approaching the 500 runs limit (see the chunking sketch below)
- Best practices for batching strategies
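For the batching-strategy point, a caller staying under the 500-run limit could chunk its items before triggering; a minimal sketch (the limit constant mirrors the cap mentioned above):

```ts
const MAX_BATCH_ITEMS = 500; // server-side cap on runs per batch

// Split a large payload list into groups no larger than the batch limit,
// so each group can be passed to a single batch trigger call.
export function chunkForBatch<T>(items: T[], size: number = MAX_BATCH_ITEMS): T[][] {
  const groups: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    groups.push(items.slice(i, i + size));
  }
  return groups;
}
```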
Let's check if there's any existing documentation about rate limits:
packages/react-hooks/src/hooks/useApiClient.ts (2)
16-22
: LGTM! Well-documented type definition.
The new enabled
option is a clean addition with clear documentation explaining its purpose.
52-54
: Verify impact on existing consumers.
The conditional logic is well-implemented. However, since this is a breaking change in error handling behavior, we should verify its impact.
✅ Verification successful
No breaking changes detected, implementation is safe
The verification shows that the change is safe because:
- The hook's return type was already
ApiClient | undefined
before this change - All consuming hooks (
useTaskTrigger
,useRun
,useRealtime
) properly handle the undefined case as they're using the client within async functions - The new behavior (returning undefined when
enabled: false
) is explicitly documented in the hook's type definition - This change actually makes the hook more flexible by providing a graceful way to handle initialization scenarios without throwing errors
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for direct usage of useApiClient to identify potentially affected code
rg -t typescript -t javascript "useApiClient\(" --no-heading -A 2
# Search for potential type errors where the return value is used without checking for undefined
ast-grep --pattern 'const $var = useApiClient($$$)'
Length of output: 156
Script:
#!/bin/bash
# Search for direct usage of useApiClient with proper file extensions
rg -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx" "useApiClient\(" --no-heading -A 2
# Search for potential type errors where the return value is used without checking for undefined
ast-grep --pattern 'const $_ = useApiClient($$$)'
# Look for test files that might be affected
rg -g "*test*.ts" -g "*test*.tsx" -g "*spec*.ts" -g "*spec*.tsx" "useApiClient" --no-heading
# Check the hook's implementation for better context
cat packages/react-hooks/src/hooks/useApiClient.ts
Length of output: 4895
references/nextjs-realtime/src/app/runs/[id]/ClientRunDetails.tsx (2)
7-7
: LGTM: Import statement is correctly added
The import statement for the required React hooks is properly placed.
16-25
: 🛠️ Refactor suggestion
Verify the necessity of delayed token initialization
The 2-second delay in setting the access token raises several concerns:
- It introduces an artificial delay in data fetching
- It may impact the user experience, especially in batch processing scenarios
- It could cause unnecessary re-renders
Let's check if this delay pattern is used elsewhere or if there's a documented reason:
Consider removing the delay unless there's a specific requirement. Here's a suggested implementation:
- const [accessToken, setAccessToken] = useState<string | undefined>(undefined);
-
- // call setAccessToken with publicAccessToken after 2 seconds
- useEffect(() => {
- const timeout = setTimeout(() => {
- setAccessToken(publicAccessToken);
- }, 2000);
-
- return () => clearTimeout(timeout);
- }, [publicAccessToken]);
+ const [accessToken, setAccessToken] = useState<string | undefined>(publicAccessToken);
+
+ useEffect(() => {
+ setAccessToken(publicAccessToken);
+ }, [publicAccessToken]);
packages/react-hooks/src/hooks/useRun.ts (2)
39-41
: LGTM: Good error handling for missing access token!
The explicit check for apiClient
with a clear error message improves the developer experience.
36-56
: Verify batch processing support.
Given that this PR introduces batch processing capabilities, please verify if this hook needs to handle any batch-specific states or error conditions.
✅ Verification successful
Let me gather more specific information about the batch processing implementation in the context of runs.
Let me try one final verification to check the run-related types and implementations:
Based on the comprehensive analysis of the codebase, I can now provide the final response:
The hook correctly handles batch processing scenarios
The useRun
hook implementation is consistent with the core types and doesn't need batch-specific states because:
- The
RetrieveRunResult
type from@trigger.dev/core/v3
already includes all necessary run states, including batch-related information - The hook's error handling and loading states are generic enough to handle both single and batch runs
- The refresh interval logic appropriately handles completion states regardless of whether it's a batch or single run
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for batch-related types and implementations
# Check for batch-related types in core
rg -t typescript "type.*Batch" packages/core
# Check for batch-related error handling
rg -t typescript "BatchError|batch.*error" packages/core
# Look for other implementations of batch processing
ast-grep --pattern 'batch.$$$($$$)'
Length of output: 2297
Script:
#!/bin/bash
# Check for batch-related types and implementations in the react-hooks package
rg -g "*.ts" "type.*Batch|interface.*Batch" packages/react-hooks
# Look for batch-related run implementations
ast-grep --pattern 'retrieveRun($$$)'
# Check for batch-related hooks
rg -g "*.ts" "useBatch|BatchRun" packages/react-hooks
Length of output: 162
Script:
#!/bin/bash
# Check for run-related types and implementations
rg -g "*.ts" "interface.*Run|type.*Run" packages/core packages/react-hooks
# Look for the RetrieveRunResult type usage
rg -g "*.ts" "RetrieveRunResult" packages/core packages/react-hooks
# Check the implementation of retrieveRun in the API client
ast-grep --pattern 'retrieveRun($$$) {
$$$
}'
Length of output: 16949
.changeset/perfect-onions-call.md (1)
47-59
: Add timeout handling to triggerByTaskAndWait
The triggerByTaskAndWait
method could potentially wait indefinitely. Consider adding a timeout parameter and handling timeout scenarios.
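One way to bound the wait is to race the awaited call against a timer; a self-contained sketch (not part of the SDK):

```ts
// Reject if the awaited batch does not settle within the given time budget.
export async function withTimeout<T>(promise: Promise<T>, timeoutMs: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${timeoutMs}ms`)), timeoutMs);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```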
apps/webapp/app/components/runs/v3/BatchStatus.tsx (1)
16-18
: LGTM! Clean and type-safe implementation.
The utility function is well-typed and serves its purpose effectively.
packages/trigger-sdk/package.json (2)
Line range hint 1-112
: Verify the dual-format module resolution
The package now supports both ESM and CommonJS formats with complex export mappings. Please ensure this works correctly across different environments and bundlers.
✅ Verification successful
Based on the results, I can now generate a final response. The extensive usage of @trigger.dev/sdk
across the codebase, particularly in test files and example implementations, shows that both ESM and CommonJS imports are being actively used and tested. The exports configuration in package.json is well-structured to support both module systems, with proper path mappings for both the main package and the v3 subpath exports.
Dual-format module resolution is properly configured
The package.json exports field correctly handles:
- ESM and CommonJS resolution paths
- Type definitions for both formats
- Source file mappings for development
- Subpath exports for v3
The extensive usage across test files, examples, and real implementations demonstrates that the dual-format support is working as intended. The package is being successfully imported using both import
and require
statements throughout the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the package's exports configuration and module resolution
# Test both ESM and CommonJS resolution paths
# Check if the package can be imported in both ESM and CommonJS contexts
cat > test-esm.mjs << EOL
import * as sdk from '@trigger.dev/sdk';
import * as v3 from '@trigger.dev/sdk/v3';
console.log('ESM import successful');
EOL
cat > test-cjs.cjs << EOL
const sdk = require('@trigger.dev/sdk');
const v3 = require('@trigger.dev/sdk/v3');
console.log('CommonJS require successful');
EOL
# Use package-support to analyze exports field
npx package-support --analyze ./package.json
Length of output: 153
Script:
#!/bin/bash
# Let's verify the exports configuration using different approaches
# 1. Check if the source files referenced in exports exist
fd -t f "index.ts" src/
# 2. Check the structure of dist directory
fd -t f . dist/ || echo "dist directory not found or empty"
# 3. Examine the tshy configuration and related files
cat tsconfig.build.json || echo "tsconfig.build.json not found"
# 4. Look for any import/require statements in the codebase that use this package
rg "@trigger.dev/sdk" --type ts -A 2
# 5. Check if there are any test files that verify the imports
fd -t f "test|spec" . -e ts -e js
Length of output: 26095
Line range hint 89-111
: Ensure consistent module resolution across environments
The exports field configuration looks thorough, covering both ESM and CommonJS use cases with proper type definitions. However, the complexity warrants thorough testing.
apps/webapp/app/models/taskRun.server.ts (2)
Line range hint 114-143
: Function is well-implemented with exhaustive status handling.
The status mapping is comprehensive and type-safe.
52-52
: LGTM! Consistent enhancement of execution results with task identification.
The addition of taskIdentifier
across all execution result types improves task traceability in batch operations.
Let's verify the usage of this new field:
Also applies to: 63-63, 96-96, 107-107
✅ Verification successful
LGTM! The taskIdentifier field is properly integrated and consumed across the codebase.
The search results confirm that taskIdentifier
is:
- Used consistently in batch processing logic (references/v3-catalog/src/trigger/batch.ts)
- Properly defined in database schema (internal-packages/database/prisma/schema.prisma)
- Integrated in task run models and APIs
- Consumed by UI components for display and filtering
- Used in task scheduling and execution workflows
- Included in monitoring and alerting systems
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if taskIdentifier is properly consumed by batch processing code
# Expected: Find references to taskIdentifier in batch processing logic
# Search for taskIdentifier usage in batch-related files
rg -t typescript -t javascript "taskIdentifier" --glob "**/*batch*"
# Search for any potential undefined checks for taskIdentifier
ast-grep --pattern 'taskIdentifier === undefined'
Length of output: 154
Script:
#!/bin/bash
# Let's try a broader search without file type restrictions
# Search for taskIdentifier usage across the codebase
rg "taskIdentifier" -A 2 -B 2
# Also search for batch-related files to understand the context
fd "batch" --type f
Length of output: 131122
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (2)
9-9
: LGTM: Schema changes for idempotency key TTL
The schema changes properly handle the optional TTL header using appropriate zod types.
Also applies to: 19-19
Line range hint 1-141
: Verify batch processing compatibility
Since this PR focuses on batch trigger upgrades, let's verify that this endpoint properly handles batch-related scenarios.
✅ Verification successful
Let me gather more specific information about batch-related configurations and headers.
Based on the analysis of the code, I can now provide a final response regarding the batch processing compatibility.
Batch processing is properly handled - no changes needed
The endpoint under review (api.v1.tasks.$taskId.trigger.ts
) correctly handles all necessary headers and parameters that are also used in batch processing:
- It properly handles idempotency keys and TTL
- It includes trace context handling which is essential for batch operations
- It shares the same core schemas and types with batch endpoints
- The headers schema (
HeadersSchema
) includes all necessary fields for both single and batch operations
Key evidence from the codebase:
- The
TriggerTaskRequestBody
schema is the base forBatchTriggerTaskRequestBody
- Both single and batch triggers use the same header handling for idempotency, tracing, and versioning
- The
TriggerTaskService
is used internally byBatchTriggerTaskService
, ensuring consistent behavior
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for batch-related configurations and usages
# Check for batch-related configurations in the codebase
echo "Checking for batch-related configurations..."
rg -l "batch" --type ts
# Check for any batch size limits or related constants
echo "Checking for batch size limits..."
rg -l "BATCH_SIZE|MAX_BATCH" --type ts
# Check for other trigger endpoints that might need similar TTL handling
echo "Checking for other trigger endpoints..."
fd "trigger.ts$"
Length of output: 5968
Script:
#!/bin/bash
# Check the batch trigger endpoint implementation
echo "Checking batch trigger endpoint..."
cat apps/webapp/app/routes/api.v1.tasks.$taskId.batch.ts
# Check batch-related schemas and types
echo "Checking batch-related schemas..."
cat packages/core/src/v3/schemas/api.ts
# Check batch trigger service
echo "Checking batch trigger service..."
cat apps/webapp/app/v3/services/batchTriggerTask.server.ts
Length of output: 27336
apps/webapp/app/v3/services/batchTriggerTask.server.ts (1)
29-33
:
Breaking Change: Modified uniqueness constraint for batch task runs
The removal of taskIdentifier
from the composite key means that idempotency keys must now be unique across all tasks within an environment, not just per task. This could break existing code that relies on task-specific idempotency keys.
Let's verify if there are any existing batch runs that would conflict under the new constraint:
✅ Verification successful
Let me gather more information about the idempotency key usage and constraints:
Let me search for the actual usage of idempotency keys in batch tasks:
Based on the code analysis, I can now provide a final response regarding the idempotency key change:
The idempotency key change is safe and correctly implemented
The removal of taskIdentifier
from the composite key is intentional and safe because:
- The
BatchTaskRun
model has a unique constraint on(runtimeEnvironmentId, idempotencyKey)
which is correctly enforced at the database level - The new implementation in
BatchTriggerTaskService
andBatchTriggerV2Service
properly handles idempotency by checking for existing batch runs using this constraint - The migration history shows this was a deliberate design change to support the new v3 batch functionality
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for any code that might be affected by the idempotency key change
# Look for patterns where the same idempotencyKey is used with different tasks
# Search for batch.trigger calls with idempotency keys
rg -A 5 'batch\.trigger.*idempotencyKey'
# Search for BatchTriggerTaskService usage
rg -A 5 'new BatchTriggerTaskService'
Length of output: 629
Script:
#!/bin/bash
# Search for schema definitions and migrations related to batch tasks
fd -e prisma | xargs cat
# Search for tests that might reveal idempotency key usage patterns
rg -l "idempotencyKey.*test" | xargs cat
# Look for any documentation about idempotency keys
fd "README|CONTRIBUTING" -e md | xargs cat
Length of output: 119779
Script:
#!/bin/bash
# Search for batch trigger task implementations and tests
rg -A 10 'batchTriggerTask|BatchTriggerTaskService'
# Search for any code that uses idempotency keys with batch tasks
rg -A 5 'batch.*idempotencyKey'
# Look for any batch task related migrations
rg -l 'batch.*task' | grep -i 'migration' | xargs cat
Length of output: 25941
packages/react-hooks/src/hooks/useTaskTrigger.ts (3)
67-67
: LGTM: Type update aligns with SDK changes
The type change from TaskRunOptions
to TriggerOptions
maintains compatibility with the broader SDK updates.
Line range hint 144-187
: LGTM: Changes properly integrate with dependent hooks
The modifications to useTaskTrigger
maintain compatibility with useRealtimeTaskTrigger
and useRealtimeTaskTriggerWithStreams
. The improved error handling benefits all dependent hooks.
Also applies to: 189-223
11-11
: Verify v3 compatibility across the codebase
The import from "@trigger.dev/core/v3" suggests this is part of a v3 upgrade. Ensure all related dependencies and imports are consistently using v3.
✅ Verification successful
No mixed versions detected - all imports consistently use v3
Based on the comprehensive search results, all imports from @trigger.dev/core
across the codebase consistently use the v3 version. The type renaming from TaskRunOptions
to TriggerOptions
is part of this v3 version, and there are no v2 imports that would cause compatibility issues.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for mixed version imports across the codebase
rg --type typescript "from\s+['\"]@trigger\.dev\/core\/(v[23])" -A 1
Length of output: 202
Script:
#!/bin/bash
# Check for mixed version imports across the codebase, fixing the regex escape
rg --type typescript "from ['\"]@trigger\.dev/core/(v[23])"
Length of output: 98
Script:
#!/bin/bash
# Check for mixed version imports across the codebase using file extensions
rg "from ['\"]@trigger\.dev/core/(v[23])" -g "*.ts" -g "*.tsx"
Length of output: 29481
references/v3-catalog/src/management.ts (2)
1-1
: LGTM!
The addition of the batch
import is appropriate for the new batch processing functionality.
277-277
: LGTM!
The function invocation follows the established pattern in the file, with appropriate error handling.
apps/webapp/app/components/primitives/DateField.tsx (2)
15-15
: LGTM: Button variant change.
The change from "minimal/small" to "tertiary/small" for the clear button variant maintains consistency with the now button's styling.
143-143
: LGTM: Button text simplification.
The simplified button text improves consistency with the Clear button and reduces unnecessary complexity.
apps/webapp/app/v3/services/createTaskRunAttempt.server.ts (1)
75-79
: LGTM! Good optimization of the database query.
The change to select only the friendlyId
field from batchTaskRun
is a good optimization since that's the only field needed when constructing the execution object. This reduces the amount of data fetched from the database.
Let's verify this optimization by checking the usage of batchTaskRun
data:
✅ Verification successful
Optimization verified - only friendlyId is used from batchTaskRun
The verification confirms that batchTaskRun
data is only used to access the friendlyId
field when constructing the execution object (taskRun.batchItems[0].batchTaskRun.friendlyId
). Therefore, the optimization to select only the friendlyId
field in the database query is appropriate and efficient.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that only friendlyId is used from batchTaskRun
# Expected: Only friendlyId references in the execution object construction
# Search for batchTaskRun usage in the file
rg -A 5 "batchTaskRun" apps/webapp/app/v3/services/createTaskRunAttempt.server.ts
Length of output: 447
apps/webapp/app/presenters/v3/ApiRunListPresenter.server.ts (2)
122-122
: LGTM: Schema addition follows established patterns
The new batch filter parameter is correctly implemented as an optional string, maintaining consistency with the existing filter schema structure.
213-215
: Verify RunListOptions type and consider input validation
While the implementation is straightforward, consider:
- Verifying that
batchId
is properly defined in theRunListOptions
type - Adding validation for the batch ID format if there are any specific requirements
Let's verify the RunListOptions type definition:
✅ Verification successful
batchId
property is correctly defined in RunListOptions type
The batchId
property is properly defined as an optional string property in the RunListOptions
type at apps/webapp/app/presenters/v3/RunListPresenter.server.ts:26
. The implementation aligns with the type definition, and no issues were found.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for RunListOptions type definition
ast-grep --pattern 'type RunListOptions = {
$$$
batchId?: $_
$$$
}'
# Alternatively, look for the interface definition
ast-grep --pattern 'interface RunListOptions {
$$$
batchId?: $_
$$$
}'
Length of output: 1992
packages/core/src/v3/apiClient/runStream.ts (1)
46-46
: LGTM! The new type enhances type safety for batch operations.
The addition of AnyRealtimeRun
type is a good abstraction that will help handle real-time runs of any task type, which is particularly useful for the new batch processing features.
Let's verify how this new type is being used in the codebase:
✅ Verification successful
The new AnyRealtimeRun
type is being used effectively in batch processing
The verification shows that AnyRealtimeRun
is:
- Properly exported and re-exported through the SDK packages
- Actively used in batch processing code, particularly in
references/v3-catalog/src/trigger/batch.ts
where it's used to track runs in a Map - Integrated with batch operations like
batch.trigger()
,batch.triggerByTask()
, and their wait variants
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check usage of AnyRealtimeRun type
# Expected: Find imports and usage of this type in batch processing related code
# Search for direct references to AnyRealtimeRun
echo "Direct references to AnyRealtimeRun:"
rg "AnyRealtimeRun"
# Search for potential batch processing code that might benefit from this type
echo "\nPotential batch processing code:"
rg "batch\.(trigger|triggerByTask)"
Length of output: 1662
packages/cli-v3/src/entryPoints/dev-run-worker.ts (3)
197-197
: LGTM: Enhanced error tracing with task identification
The addition of taskIdentifier
to the error response improves traceability, particularly useful in batch processing scenarios.
355-355
: LGTM: Consistent task identification in responses
The addition of taskIdentifier
to success responses maintains consistency with error responses and supports batch processing traceability.
279-279
: Verify task registration process
This error occurs after successful task import, which suggests a potential issue with task registration. Consider adding logging or verification steps to understand why a task might exist in the manifest but fail to register properly.
✅ Verification successful
Let me gather more information about the task registration and execution flow to better understand the context.
Based on the gathered information, I can now provide a final response:
Task registration and error handling are correctly implemented
The error response with taskIdentifier
is appropriate and not redundant because:
- Tasks are registered in two steps: metadata registration (
registerTaskMetadata
) and file metadata registration (registerTaskFileMetadata
) - The error occurs when
taskCatalog.getTask()
fails to find a task, which can happen if either registration step fails - The error provides valuable debugging information by including the task ID in both the error message and response structure
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for inconsistencies in task registration process
# Search for task registration patterns
rg -A 5 'taskCatalog\.registerTask|taskCatalog\.registerTaskFileMetadata'
# Search for potential task registration issues in logs
rg -A 5 'Could not find task|Could not find executor'
Length of output: 5032
Script:
#!/bin/bash
# Check the task execution flow and error handling
# Search for task execution and import patterns
ast-grep --pattern 'try {
$$$
const task = await import($$$)
$$$
}'
# Search for task catalog usage patterns
ast-grep --pattern 'const task = taskCatalog.getTask($$$)'
Length of output: 367
apps/webapp/app/env.server.ts (1)
241-241
: LGTM! Verify the batch size limit usage.
The 500-item limit for batch triggers aligns with the PR's objective of enhancing batch processing capabilities.
Let's verify how this limit is enforced across the codebase:
✅ Verification successful
Batch size limit is properly enforced in the API endpoint
The limit of 500 items is correctly enforced in apps/webapp/app/routes/api.v1.tasks.batch.ts
where it validates the batch size against MAX_BATCH_V2_TRIGGER_ITEMS
and returns an error if exceeded. The implementation is consistent with the environment variable's intended usage.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how MAX_BATCH_V2_TRIGGER_ITEMS is used in validation logic
# Search for direct references to the env variable
rg "MAX_BATCH_V2_TRIGGER_ITEMS"
# Search for the number 500 in batch-related validation code
rg -A 5 "batch.*valid.*500|500.*batch.*valid"
# Search for batch trigger implementations
ast-grep --pattern 'batch.trigger$$$'
Length of output: 2516
packages/cli-v3/src/entryPoints/deploy-run-worker.ts (5)
219-219
: LGTM: Enhanced error tracking for concurrent execution
The addition of taskIdentifier to the error response helps track which task was blocked due to another task already running.
250-250
: LGTM: Consistent error reporting structure
The taskIdentifier addition maintains consistency in error reporting structure, making it easier for error handlers to process task-related failures uniformly.
309-309
: LGTM: Improved executor resolution tracking
The taskIdentifier addition helps identify which task's executor could not be found, improving debugging of task registration issues.
388-388
: LGTM: Consistent result structure
The taskIdentifier addition maintains consistency between success and error responses, improving observability.
282-282
: LGTM: Enhanced import failure tracking
The taskIdentifier addition helps track which task failed to import. Consider verifying the module resolution paths.
✅ Verification successful
LGTM: Task identifier in error responses is properly implemented
The code correctly includes the task identifier in error responses for both task import failures and task lookup failures. The error handling is comprehensive, providing:
- Clear error messages with task IDs for debugging
- Proper error categorization (COULD_NOT_FIND_TASK, COULD_NOT_IMPORT_TASK)
- Stack traces and detailed error information in the response
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify module resolution paths for tasks
# Test: Check if task entry points exist and are accessible
# Find all task entry points in the manifest
fd -e json -x jq -r '.tasks[].entryPoint' {} \;
# Verify the paths exist
fd -e json -x jq -r '.tasks[].entryPoint' {} \; | xargs -I {} sh -c '[ -f "{}" ] && echo "✓ {} exists" || echo "✗ {} missing"'
Length of output: 14770
Script:
#!/bin/bash
# Let's try a different approach to find task-related files and their imports
# Search for task-related files and their imports
rg -g '*.{ts,js}' -A 5 'export (default )?class.*extends.*Task'
# Look for task registration patterns
rg -g '*.{ts,js}' 'registerTask|addTask|defineTask' -A 2
Length of output: 5687
Script:
#!/bin/bash
# Let's examine the task import mechanism and error handling
# Look for task import related code and error handling
rg -g '*.{ts,js}' -B 3 -A 5 'import.*task.*fail|error.*task.*import'
# Search for the specific error handling around task.id
rg -g '*.{ts,js}' -B 5 -A 5 'task(\.|\[)id.*error'
Length of output: 1131
Script:
#!/bin/bash
# Let's look at the specific file and its context
# Examine the deploy-run-worker.ts file for task import and error handling
rg -g 'deploy-run-worker.ts' -B 10 -A 10 'task.*id.*error|error.*task.*id'
# Look for the error handling implementation around this file
ast-grep --pattern 'catch ($err) {
$$$
task$$$
$$$
}' packages/cli-v3/src/entryPoints/deploy-run-worker.ts
Length of output: 5708
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.runs._index/route.tsx (2)
85-88
: Verify TaskRunListSearchFilters schema update.
The destructuring pattern correctly includes the new parameters. Let's verify that the TaskRunListSearchFilters schema has been properly updated to handle these new parameters.
✅ Verification successful
Schema validation is properly implemented for all parameters
The TaskRunListSearchFilters schema in RunFilters.tsx
includes all the required parameters with appropriate validation:
showChildTasks
as optional booleanrunId
as optional stringbatchId
as optional stringscheduleId
as optional string
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if TaskRunListSearchFilters schema includes the new parameters
# Search for TaskRunListSearchFilters schema definition
ast-grep --pattern 'const TaskRunListSearchFilters = z.object({
$$$
})'
# Alternatively, search for the schema file
rg -l "TaskRunListSearchFilters"
Length of output: 3131
110-113
: Verify RunListPresenter implementation for new parameters.
The parameters are correctly passed to the presenter with proper inversion of showChildTasks
to rootOnly
. Let's verify that the RunListPresenter has been updated to handle these new parameters.
✅ Verification successful
RunListPresenter correctly handles all new parameters
The implementation in RunListPresenter.server.ts
properly supports all the parameters being passed:
batchId
is handled in the options type and query constructionrunId
is properly processed and used in the SQL queryscheduleId
is supported with proper friendly ID resolutionrootOnly
(derived from !showChildTasks
) is correctly implemented with the SQL conditionAND tr."rootTaskRunId" IS NULL
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check RunListPresenter implementation for new parameters
# Search for RunListPresenter class and its call method
ast-grep --pattern 'class RunListPresenter {
$$$
call($params) {
$$$
}
$$$
}'
# Search for any tests covering these new parameters
rg -A 5 "RunListPresenter.*test"
Length of output: 126
Script:
#!/bin/bash
# Let's try different search patterns to locate the RunListPresenter implementation
# Search for RunListPresenter class definition
rg -l "class RunListPresenter"
# Search for files containing RunListPresenter
rg -l "RunListPresenter"
# Search for files with presenter in their name
fd "presenter" --type f
Length of output: 4838
Script:
#!/bin/bash
# Let's examine the v3 RunListPresenter implementation since it's the one being used in the route
# Check the implementation of RunListPresenter.server.ts
cat apps/webapp/app/presenters/v3/RunListPresenter.server.ts
# Also check for any test files related to this presenter
fd "RunListPresenter" --type f --extension test.ts
Length of output: 11787
packages/react-hooks/src/hooks/useRealtime.ts (2)
76-77
: LGTM! Defensive programming practice added.
The addition of the apiClient check alongside runId validation is a good defensive programming practice that prevents potential runtime errors from attempting API calls without a valid client.
Line range hint 76-94
: Verify error handling consistency across hooks.
The error handling pattern is consistent across all hooks, properly handling abort errors and setting other errors. However, let's verify that errors are properly propagated to the error boundary.
Also applies to: 210-229, 324-343, 414-433
✅ Verification successful
Let me gather more information about error handling in the hooks and their integration with error boundaries.
Based on the analysis of the shell script results, I can now provide a final response:
Error handling is properly implemented and propagated to error boundaries
The error handling pattern in the useRealtime
hooks is consistent and well-integrated with React's error handling system:
- All hooks properly catch and handle AbortError separately
- Other errors are properly typed and set via
setError(err as Error)
- Components using these hooks consistently handle errors by displaying them through
RouteErrorDisplay
- Error boundaries are properly implemented at multiple levels (root, app, organization, project routes)
- The error display pattern is uniform across the application with proper error boundary fallbacks
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent error handling patterns across the codebase
# Look for error boundary implementations
rg -A 5 "ErrorBoundary|componentDidCatch|getDerivedStateFromError"
# Check for error handling patterns in other realtime hooks
rg -A 5 "setError\(.*as Error\)"
Length of output: 4331
Script:
#!/bin/bash
# Check how errors from hooks are caught and displayed
# Look for error state handling in components using these hooks
rg -A 5 "useRealtime.*"
# Check for error display components
rg -A 5 "RouteErrorDisplay"
# Look for error state handling patterns
ast-grep --pattern 'const [$_] = useState<Error>'
Length of output: 34895
apps/webapp/app/components/navigation/SideMenu.tsx (3)
15-15
: LGTM: Icon import follows conventions
The Squares2X2Icon import follows the established pattern of using solid heroicons.
49-49
: LGTM: Path import follows v3 naming convention
The v3BatchesPath import follows the established pattern of v3-prefixed path builders.
480-486
: LGTM: Menu item follows established patterns
The Batches menu item is well-integrated with consistent styling and proper navigation setup. The placement after Runs is logical for the batch processing workflow.
Let's verify the v3BatchesPath route is properly set up:
✅ Verification successful
Route and path implementation verified
The v3BatchesPath is properly implemented in pathBuilder.ts, correctly constructing the URL path by extending the project path with "/batches". The route file exists at the correct location (as seen in previous search results), and the path is being used consistently in both the SideMenu component and route handling. The menu item implementation follows the established patterns and integrates with the routing system.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the v3BatchesPath route exists and is consistent
# Look for route definition and any related components
# Search for v3BatchesPath definition
rg "export.*v3BatchesPath.*=" -A 3
# Search for route components that handle this path
rg "v3BatchesPath|route.*batches" --type tsx
Length of output: 111
Script:
#!/bin/bash
# Let's try different approaches to find the route and path definitions
# Search for batches-related paths and routes without file type restriction
rg "v3BatchesPath|route.*batches"
# Search for any file containing "batches" in the path or name
fd -i batches
# Look for route definitions or imports related to batches
rg "import.*[Bb]atches|export.*[Bb]atches"
# Search for path construction patterns
rg "path.*batch|batch.*path" -i
Length of output: 2130
Script:
#!/bin/bash
# Let's verify the implementation of v3BatchesPath and its route handler
# Check the v3BatchesPath implementation
cat apps/webapp/app/utils/pathBuilder.ts | grep -A 2 "export function v3BatchesPath"
# Check the route implementation
cat "apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx"
Length of output: 399
apps/webapp/app/components/primitives/Select.tsx (2)
616-616
: LGTM! Height increase aligns with batch processing needs
Increasing the max height of the select popover from 520px to 600px is a good enhancement that will accommodate more items in batch scenarios while maintaining responsive behavior through the CSS min function.
443-443
: Verify the custom text size class
The text size class text-2sm
appears to be non-standard. Please verify that this custom class is defined in your Tailwind CSS theme configuration.
✅ Verification successful
The text-2sm
class is correctly defined in the theme
The text-2sm
class is a valid custom font size defined in the Tailwind configuration with a size of 0.8125rem
. This class is consistently used across multiple components in the codebase for small interactive elements like menu items, select items, and buttons. The change is aligned with the existing design system.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for text-2sm class definition in Tailwind config
fd -e js -e ts -e json "tailwind.config" -x cat {} | grep -A 5 -B 5 "text-2sm"
Length of output: 78
Script:
#!/bin/bash
# Let's try a broader search for text-2sm in the codebase
rg "text-2sm" -A 3 -B 3
# Also search for fontSize in tailwind config to understand custom text sizes
fd -e js -e ts -e json "tailwind.config" -x cat {} | grep -A 10 -B 10 "fontSize"
Length of output: 4701
apps/webapp/app/utils/pathBuilder.ts (1)
440-442
: LGTM! Path builder function for batch listing.
The implementation follows the established patterns and naming conventions.
apps/webapp/app/services/worker.server.ts (2)
58-58
: LGTM: Import statement follows established patterns
The import statement for batch processing components follows the project's established naming and organization patterns.
201-201
: LGTM: Worker catalog entry properly configured
The new batch processing task is correctly registered in the worker catalog with appropriate schema validation.
internal-packages/database/prisma/schema.prisma (3)
1695-1699
: Good denormalization for performance optimization.
The denormalized version fields (taskVersion
, sdkVersion
, cliVersion
) will help reduce joins and improve query performance when these fields are frequently accessed.
Line range hint 1799-1814
: Well-designed indexes for common query patterns.
The added indexes will optimize:
- Task activity graphs (
projectId, createdAt, taskIdentifier
) - Run list views (
projectId, id DESC
) - Status filtering (
projectId, status
) - Batch operations (
runtimeEnvironmentId, batchId
)
2173-2174
: Good unique constraint for idempotency.
The unique constraint on [runtimeEnvironmentId, idempotencyKey]
is well-designed for preventing duplicate batch runs within the same environment.
apps/webapp/app/v3/r2.server.ts (2)
Line range hint 22-61
: Function uploadPacketToObjectStore
updated correctly
The function has been updated to accept ReadableStream | string
for the data
parameter and handles uploads appropriately with necessary error handling.
64-107
: New function downloadPacketFromObjectStore
implemented correctly
The addition of downloadPacketFromObjectStore
enhances functionality by allowing packets to be downloaded from the object store with proper error handling and logging.
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (1)
182-190
: Ensure all terminal statuses are considered for 'hasFinished'
Currently, only the COMPLETED
status is checked to determine if a batch has finished. If there are other terminal statuses like FAILED
or CANCELLED
, they should be included.
Please verify if other statuses indicate completion and update accordingly.
- const hasFinished = batch.status === "COMPLETED";
+ const terminalStatuses = ["COMPLETED", "FAILED", "CANCELLED"] as BatchTaskRunStatus[];
+ const hasFinished = terminalStatuses.includes(batch.status);
apps/webapp/app/v3/services/resumeBatchRun.server.ts (2)
177-184
: Handle potential for empty completedAttemptIds
array
When resuming the dependent run without a checkpoint, the completedAttemptIds
array is populated from batchRun.items
. Ensure that if batchRun.items
is empty or none have taskRunAttemptId
, the array doesn't cause issues downstream.
Confirm that the system can handle an empty completedAttemptIds
array without errors. If necessary, add a check to handle this case.
155-163
: Investigate potential for unhandled states when dependentTaskAttempt
is paused without a checkpoint
The code logs an error when the dependentTaskAttempt
is paused but there is no checkpointEventId
. It mentions that in race conditions, this state can occur and that the checkpoint will later continue the run. Ensure that this situation is adequately handled and won't lead to the batch run being left in an incomplete state.
Run the following script to search for any batch runs that might be left incomplete due to this state:
This script assumes you have access to run SQL queries against your database.
✅ Verification successful
Based on the code analysis, I can now provide a final response:
The PAUSED state without checkpoint is properly handled as a transient race condition
The code shows a well-designed handling of this state:
- The condition is explicitly handled in both
resumeBatchRun.server.ts
andresumeTaskDependency.server.ts
services - When a task is PAUSED without a checkpoint, it's logged as an error but treated as a transient state
- The code comments explicitly document that this is a race condition where the checkpoint creation is in progress
- The system will automatically continue the run once the checkpoint is created, as evidenced by the checkpoint handling logic in
sharedQueueConsumer.server.ts
- The database schema (
schema.prisma
) shows proper relationships between checkpoints and runs with appropriate cascade behaviors
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find batch runs with paused dependent attempts but no checkpoint event.
# Search the database for such batch runs (this is a pseudo-code representation)
psql -c "
SELECT btr.id, btr.status
FROM batch_task_runs btr
JOIN task_run_attempts tra ON btr.dependent_task_attempt_id = tra.id
WHERE tra.status = 'PAUSED' AND btr.checkpoint_event_id IS NULL;
"
Length of output: 299
Script:
#!/bin/bash
# Let's analyze the codebase to understand how this state is handled
# First, let's find all references to PAUSED status and checkpoint handling
rg -A 5 "status === \"PAUSED\""
# Also search for checkpoint event handling
rg -A 5 "checkpointEventId"
# Look for any error handling or recovery mechanisms for paused states
ast-grep --pattern 'if ($_ === "PAUSED") { $$$ }'
Length of output: 26541
apps/webapp/app/v3/services/finalizeTaskRun.server.ts (2)
85-93
: Good error handling when finalizing batch
Wrapping the call to this.#finalizeBatch(run)
in a try-catch
block ensures that any errors during batch finalization are caught and logged without disrupting the overall flow of the call
method.
193-196
: Verify the conditional logic in batch item processing
The condition if (environment.type !== "DEVELOPMENT" && item.batchTaskRun.dependentTaskAttemptId)
skips processing certain batch items. Please verify that this logic aligns with the intended behavior, ensuring that batch items with a dependentTaskAttemptId
are correctly handled in non-development environments.
To confirm the condition's correctness, consider reviewing related code segments handling batch processing for different environment types.
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (5)
234-235
: Ensure consistent display of finished batches
When displaying batch.finishedAt
, the code shows "–" if batch.finishedAt
is null. Verify that this correctly represents batches that are not finished and consider providing additional context if necessary.
Confirm that batch.finishedAt
being null accurately reflects a batch in progress.
205-239
: Add unique keys when rendering list items
When mapping over batches
, the key
prop is set to batch.id
. Ensure that batch.id
is unique across all batches to prevent potential rendering issues.
Confirm that batch.id
is a unique identifier.
78-81
: Validate date filters properly
The from
and to
query parameters are being parsed directly from the URL. Ensure that these dates are validated and sanitized to prevent potential injection attacks or errors.
[security]
Consider parsing and validating the dates using a library like date-fns
or luxon
.
import { parseISO, isValid } from "date-fns";
const s = {
// ...
from: url.searchParams.get("from") ?? undefined,
to: url.searchParams.get("to") ?? undefined,
};
+ const fromDate = s.from ? parseISO(s.from) : undefined;
+ const toDate = s.to ? parseISO(s.to) : undefined;
+ if (fromDate && !isValid(fromDate)) {
+ throw new Error("Invalid 'from' date");
+ }
+ if (toDate && !isValid(toDate)) {
+ throw new Error("Invalid 'to' date");
+ }
226-228
:
Handle potential timezone differences in LiveTimer
When using LiveTimer
with dates, ensure that timezones are correctly accounted for to prevent display inaccuracies, especially if batch.createdAt
is in UTC and the user's browser is in a different timezone.
Ensure that batch.createdAt
is consistently stored and used in UTC.
If necessary, adjust the date handling to account for timezones.
<LiveTimer startTime={new Date(batch.createdAt /* Ensure UTC time */)} />
88-90
: Verify project existence check logic
The current logic checks if !project
and redirects if the project is not found. Ensure that this logic aligns with how findProjectBySlug
behaves, especially regarding user permissions and access rights.
Run the following script to confirm that findProjectBySlug
correctly handles permission checks:
✅ Verification successful
Let me gather more information about the implementation of findProjectBySlug
to verify the permission checks.
Project existence check is correctly implemented with permission checks
The implementation of findProjectBySlug
includes proper permission checks by ensuring the user belongs to the organization through the query condition members: { some: { userId } }
. The !project
check in the route handler is appropriate as it will be null both when the project doesn't exist and when the user doesn't have access.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that findProjectBySlug checks user permissions
# Search for the implementation of findProjectBySlug
rg 'export async function findProjectBySlug' --type ts
# Check if it includes permission checks related to userId
ast-grep --pattern 'function findProjectBySlug($_, $_, $userId) {
$$$
}' --lang typescript
Length of output: 288
Script:
#!/bin/bash
# Get the implementation of findProjectBySlug with context
rg -A 20 'export async function findProjectBySlug' apps/webapp/app/models/project.server.ts
# Also check for any permission-related utility functions that might be used
rg 'checkPermission|hasPermission|canAccess' apps/webapp/app/models/project.server.ts
Length of output: 870
apps/webapp/app/presenters/v3/RunListPresenter.server.ts (4)
25-27
: Addition of new filter parameters enhances functionality
The introduction of rootOnly
, batchId
, and runId
to RunListOptions
extends the filtering capabilities, allowing for more granular run queries.
75-78
: Correct inclusion of new filters in hasFilters
logic
Updating the hasFilters
condition to include batchId
, runId
, and rootOnly
ensures accurate determination of active filters.
153-167
: Efficient resolution of batchId
from friendly ID
The logic correctly resolves the batchId
from a friendly ID to an internal ID, facilitating accurate database queries.
244-244
: Clarify the selection of tags
in the SQL query
Ensure that tr."runTags"
is consistently returned as an array, even when it's NULL
, to prevent potential issues during data mapping.
Run the following script to confirm the data type of runTags
:
✅ Verification successful
Based on the search results, I can now provide a definitive response about the runTags
column:
The runTags
column is properly handled in the SQL query
The implementation shows that:
runTags
is defined as a text array (TEXT[]
) in the database schema- The SQL query properly handles array operations:
- Selection:
tr."runTags" AS "tags"
correctly retrieves the array - Filtering: Uses PostgreSQL array operators (
&&
) for array intersection when filtering by tags
- Selection:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that 'runTags' is defined as a non-nullable array in the database schema.
# Check the table definition for 'TaskRun'
psql -c "\d+ ${sqlDatabaseSchema}.\"TaskRun\""
Length of output: 121
Script:
#!/bin/bash
# Let's try to find the schema definition and any related files
fd -e sql -e prisma . | xargs cat
# Also search for any TypeScript type definitions or interfaces related to TaskRun
ast-grep --pattern 'interface TaskRun {
$$$
}'
# And check for any Prisma schema definitions
fd schema.prisma | xargs cat
Length of output: 131249
Script:
#!/bin/bash
# Let's look for the SQL query definition and any related type definitions
rg -A 5 -B 5 'tr."runTags"'
# Also search for any SQL migrations or schema changes related to runTags
rg -g '*.sql' -A 3 -B 3 'runTags'
Length of output: 2337
packages/trigger-sdk/src/v3/runs.ts (2)
157-157
: Verify Impact of Extending RunId
Type
By adding AnyBatchedRunHandle
to the RunId
type, you're broadening the accepted types for run identifiers. Please ensure that all functions and methods using RunId
can handle AnyBatchedRunHandle
appropriately. This includes verifying that type inferences and overloads remain correct and that no unintended side effects arise in other parts of the codebase.
165-165
: Ensure retrieveRun
Handles AnyBatchedRunHandle
Correctly
The retrieveRun
function now accepts AnyBatchedRunHandle
as part of its generic type parameter. Confirm that:
- The function correctly processes batched run handles.
- The returned
RetrieveRunResult
aligns with expectations when using a batched run handle. - Existing functionality for other run handle types remains unaffected.
Adding relevant test cases for batched run handles would be beneficial to validate this change.
apps/webapp/app/v3/marqs/devQueueConsumer.server.ts (1)
381-383
: Verify that taskVersion
, sdkVersion
, and cliVersion
are defined in the TaskRun
model
Ensure that the TaskRun
model in the Prisma schema includes the new fields taskVersion
, sdkVersion
, and cliVersion
, and that the corresponding database migrations have been applied. Without these fields in the schema and database, this update operation will fail at runtime.
Run the following script to verify the presence of these fields in the Prisma schema:
✅ Verification successful
The fields are properly defined in the database schema
The migrations show that all three fields (taskVersion
, sdkVersion
, and cliVersion
) have been added to the TaskRun
table through proper migrations:
taskVersion
was added in migration20241126110327
sdkVersion
andcliVersion
were added in migration20241126110902
These fields are nullable in the database, which aligns with their usage in the code. The update operation will work correctly at runtime.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that `taskVersion`, `sdkVersion`, and `cliVersion` are defined in the `TaskRun` model in the Prisma schema.
# Expected: The fields should be defined within the `TaskRun` model.
# Search for the `TaskRun` model definition including these fields
ast-grep --pattern $'model TaskRun {
$$$
taskVersion String?
sdkVersion String?
cliVersion String?
$$$
}' **/*.prisma
Length of output: 189
Script:
#!/bin/bash
# First, let's find the Prisma schema files
fd -e prisma
# Then, let's examine the TaskRun model definition
rg "model TaskRun" -A 30 -B 2
Length of output: 10078
Script:
#!/bin/bash
# Let's check if there are any migrations that add these columns
rg "taskVersion|sdkVersion|cliVersion" internal-packages/database/prisma/migrations/
Length of output: 1167
apps/webapp/app/services/routeBuilders/apiBuilder.server.ts (1)
Line range hint 349-374
: Ensure all authorization.resource
functions accommodate the new body
parameter
The addition of the body
parameter to the authorization.resource
function signature requires updates to all existing implementations of resource
functions. Existing code may result in unexpected behavior or errors if the body
parameter is not included.
Run the following script to identify all instances where authorization.resource
functions need to be updated:
packages/core/src/v3/apiClient/index.ts (7)
74-77
: Ensure consistency in option types naming and structure.
The ClientBatchTriggerOptions
extends ClientTriggerOptions
and adds idempotencyKey
and idempotencyKeyTTL
. Verify that this naming and extension pattern aligns with the project's conventions and that similar option types follow a consistent structure.
89-92
: Review the retry strategy adjustments.
The DEFAULT_ZOD_FETCH_OPTIONS
has increased maxAttempts
from 3 to 5 and changed the factor
from 2 to 1.6. Ensure that these changes align with desired retry behaviors and do not inadvertently cause longer delays or unintended consequences in request handling.
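For context, a rough sketch of how these two settings change the delay curve, assuming a standard exponential backoff of the form delay = min(max, minDelay * factor^attempt); the exact formula, jitter handling, and option names used by the SDK's fetch retries may differ:
// Illustrative only: compare backoff delays for the old and new retry configurations.
function delays(maxAttempts: number, factor: number, minMs = 1000, maxMs = 30000): number[] {
  // One delay between each pair of attempts, capped at maxMs.
  return Array.from({ length: maxAttempts - 1 }, (_, attempt) =>
    Math.min(maxMs, minMs * Math.pow(factor, attempt))
  );
}

console.log(delays(3, 2));   // [1000, 2000]
console.log(delays(5, 1.6)); // [1000, 1600, 2560, 4096]
So the change trades steeper growth for more, slightly closer-spaced attempts; whether total wait time grows depends on the caps actually applied.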
Line range hint 231-260
: Ensure idempotency headers are correctly set in batchTriggerV2
.
In the batchTriggerV2
method, idempotency headers are added when clientOptions
include idempotencyKey
and idempotencyKeyTTL
. Verify that when these values are not provided, the headers are omitted or set appropriately to prevent unintended behavior on the server side.
Also, consider renaming idempotency-key
and idempotency-key-ttl
to use proper casing if the API expects headers in a specific format, such as Idempotency-Key
and Idempotency-Key-TTL
.
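A minimal sketch of the conditional-header pattern being suggested; the helper name and the exact header casing are assumptions for illustration, and the real client may assemble headers differently:
// Hypothetical helper: attach idempotency headers only when values are provided,
// so absent options never produce empty or misleading header values.
function withIdempotencyHeaders(
  base: Record<string, string>,
  opts?: { idempotencyKey?: string; idempotencyKeyTTL?: string }
): Record<string, string> {
  const headers = { ...base };
  if (opts?.idempotencyKey) headers["idempotency-key"] = opts.idempotencyKey;
  if (opts?.idempotencyKeyTTL) headers["idempotency-key-ttl"] = opts.idempotencyKeyTTL;
  return headers;
}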
674-684
: Verify the new retrieveBatch
method integration.
The retrieveBatch
method has been added. Ensure that:
- It aligns with the API specifications.
- Proper error handling is in place.
- The method is unit-tested.
- Documentation is updated accordingly.
Would you like help writing unit tests or updating the documentation for this new method?
809-812
: Ensure batch filtering works as expected in createSearchQueryForListRuns
.
The addition of the batch parameter in the search query should be tested to confirm that filtering runs by batch ID functions correctly.
33-33
: Consider the impact of importing AnyRunTypes
from "../types/tasks.js"
.
Ensure that importing AnyRunTypes
does not introduce circular dependencies or unintended side effects, especially if tasks.js
depends on apiClient
.
691-700
:
Add null-checking for additionalHeaders
in #getHeaders
.
While reducing additionalHeaders
, if it's undefined
, it could cause an error. Ensure that additionalHeaders
is always an object before calling Object.entries
or provide a default empty object.
Apply this fix:
- ...Object.entries(additionalHeaders ?? {}).reduce(
+ ...Object.entries(additionalHeaders || {}).reduce(
Or initialize additionalHeaders
:
const headers: Record<string, string> = {
"Content-Type": "application/json",
Authorization: `Bearer ${this.accessToken}`,
"trigger-version": VERSION,
+ ...Object.entries(additionalHeaders ?? {}).reduce(
Likely invalid or redundant comment.
references/v3-catalog/src/trigger/batch.ts (1)
301-318
: Proper handling of batch size limit with informative error messages
The code correctly tests the batch trigger with more items than allowed and appropriately handles the error. This ensures that batch size limits are enforced and informative error messages are provided to the user.
packages/core/src/v3/types/tasks.ts (24)
52-54
: Updated Generic Parameters in TaskRunPromise
The changes to TaskRunPromise
class by adding TIdentifier
and TOutput
as generic parameters enhance type safety and are correctly applied throughout the class definition.
57-61
: Updated resolve
Method Parameter Types
The resolve
method's parameter types have been correctly updated to reflect the new generic parameters TIdentifier
and TOutput
, ensuring consistency within the class.
64-64
: Updated taskId
Property with Generic Type
The taskId
property now utilizes the generic type TIdentifier
, aligning with the updated class definition and enhancing type specificity.
69-69
: Updated Return Type of unwrap
Method
The unwrap
method's return type has been updated to Promise<TOutput>
, which accurately represents the expected output type of the promise.
380-389
: Introduced BatchedRunHandle
Type
The new BatchedRunHandle
type effectively encapsulates batched run information, enhancing type safety and clarity for batched operations.
391-392
: Defined AnyBatchedRunHandle
Type
The AnyBatchedRunHandle
type provides a generic alias for BatchedRunHandle<string, any, any>
, simplifying the handling of batched runs with unspecified types.
424-434
: Updated TaskRunResult
Type with TIdentifier
The TaskRunResult
type now includes the generic TIdentifier
, enhancing type specificity for task results and better representing the task identifier in the result.
438-439
: Defined AnyTaskRunResult
Type
The AnyTaskRunResult
type provides a generic task run result type with flexible parameters, improving usability when the specific types are not known.
440-447
: Added TaskRunResultFromTask
Utility Type
The new TaskRunResultFromTask
utility type maps a Task
to its corresponding TaskRunResult
, improving type inference and ensuring consistency between tasks and their results.
486-487
: Introduced BatchTasksResult
Type
The BatchTasksResult
type alias simplifies the return type for batch task results by using BatchTasksRunHandle
, enhancing type safety for batched operations.
488-489
: Added BatchItem
Type
The BatchItem
type defines the structure for items in a batch trigger, including the payload and optional trigger options, providing clarity for batch operations.
490-494
: Added BatchTriggerAndWaitItem
Type
The BatchTriggerAndWaitItem
type defines items for batch triggers that wait for results, including the payload and optional trigger-and-wait options, enhancing type safety.
495-500
: Introduced BatchByIdItem
Type
The BatchByIdItem
type defines items for batch triggering by ID, aligning payloads and options with run types, and improving flexibility in batch operations.
501-506
: Added BatchByIdAndWaitItem
Type
The BatchByIdAndWaitItem
type supports batch triggers by ID that wait for results, utilizing generic run types to enhance type safety and consistency.
507-512
: Added BatchByTaskItem
Type
The BatchByTaskItem
type defines batch items based on a task, improving type safety and clarity when batching tasks directly.
513-517
: Added BatchByTaskAndWaitItem
Type
The BatchByTaskAndWaitItem
type supports batch triggers by task with waiting for results, enhancing type safety and consistency in batch operations.
536-538
: Updated trigger
Method Signature in Task
Interface
The trigger
method now returns a Promise<RunHandle<TIdentifier, TInput, TOutput>>
, aligning with the new run handle types and improving type safety.
549-551
: Updated batchTrigger
Method Signature
The batchTrigger
method's signature now returns a Promise<BatchRunHandle<TIdentifier, TInput, TOutput>>
, reflecting the introduction of batch run handles and enhancing type safety.
569-572
: Updated triggerAndWait
Method Signature
The triggerAndWait
method now returns a TaskRunPromise<TIdentifier, TOutput>
, enhancing type safety when waiting for task results and aligning with the updated TaskRunPromise
class.
594-596
: Updated batchTriggerAndWait
Method Signature
The batchTriggerAndWait
method returns a Promise<BatchResult<TIdentifier, TOutput>>
, consistent with the new batch result type definitions and improving type safety.
649-653
: Added TaskFromIdentifier
Utility Type
The TaskFromIdentifier
type extracts a Task
from a given identifier, improving type safety and utility when working with tasks based on their identifiers.
Line range hint 663-716
: Verify Omitted Fields in TriggerAndWaitOptions
In TriggerAndWaitOptions
, you omit "idempotencyKey"
and "idempotencyKeyTTL"
from TriggerOptions
using Omit
. Please ensure that omitting these fields is intentional and that the options behave as expected, especially concerning idempotency behavior when waiting for results.
779-784
: Confirm Completeness of BatchTriggerOptions
Type
The BatchTriggerOptions
type only includes idempotencyKey
and idempotencyKeyTTL
. Please verify if other properties from TriggerOptions
should be included to provide complete configurability when triggering batches, such as delay
, tags
, and metadata
.
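Purely to illustrate the question, an expanded shape might look like the sketch below; the extra property names and types are assumptions borrowed from the per-run TriggerOptions and would need to match the SDK's actual definitions:
// Illustrative only — not the SDK's current definition.
type ExpandedBatchTriggerOptions = {
  idempotencyKey?: string;
  idempotencyKeyTTL?: string;
  // Candidates inherited from per-run TriggerOptions, if batch-level support is intended:
  delay?: string | Date;
  tags?: string | string[];
  metadata?: Record<string, unknown>;
};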
830-833
: Assess Conditional Type in BatchRunHandleFromTypes
Ensure that the conditional type in BatchRunHandleFromTypes
correctly captures all necessary cases and that the mapping of generic parameters is accurate for consistent type inference.
packages/trigger-sdk/src/v3/shared.ts (27)
26-26
: Inclusion of TaskFromIdentifier
in imports
The addition of TaskFromIdentifier
to the import list is appropriate for type definitions and enhances type safety.
35-40
: Addition of batch-related types enhances batch processing capabilities
Introducing types like BatchByIdAndWaitItem
, BatchByTaskAndWaitItem
, BatchByIdItem
, BatchByTaskItem
, BatchByTaskResult
, and BatchByIdResult
supports the new batch processing features and improves type safety.
45-47
: Adding BatchTasksRunHandleFromTypes
, BatchTriggerAndWaitItem
, and BatchTriggerOptions
These additions are necessary for defining return types and options related to batch triggers.
66-66
: Exporting TaskRunResultFromTask
for external use
Exporting this type makes it available for external modules, improving the SDK's extensibility.
73-76
: Exporting additional types for trigger options and task results
The export of TriggerAndWaitOptions
, TriggerOptions
, and AnyTaskRunResult
enhances flexibility in how these types can be used externally.
85-85
: Exporting BatchTriggerOptions
Making BatchTriggerOptions
available externally aligns with the new batch processing capabilities.
99-100
: Consistent export of TriggerOptions
and TaskFromIdentifier
Ensures these types are consistently available across modules.
Line range hint 145-154
: Adding batchTrigger
method to createTask
The batchTrigger
method enables batch processing for tasks, improving efficiency when triggering multiple task runs.
163-164
: Introducing TaskRunPromise
in triggerAndWait
Wrapping the triggerAndWait_internal
call within TaskRunPromise
enhances promise handling and provides a consistent interface for awaiting task results.
187-187
: Adding batchTriggerAndWait
method to createTask
This method allows for batch triggering tasks and waiting for their completion, aligning with the enhanced batch processing functionality.
Line range hint 290-299
: Implementing batchTrigger
in createSchemaTask
with payload parsing
The batchTrigger
method correctly incorporates schema parsing through parsePayload
, ensuring that payloads conform to the defined schema before processing.
308-309
: Using TaskRunPromise
in triggerAndWait
with schema tasks
Consistently applies TaskRunPromise
to handle asynchronous execution and result resolution in schema-based tasks.
332-332
: Adding batchTriggerAndWait
to createSchemaTask
Enables batch triggering with schema validation and waits for the completion of all tasks, ensuring type safety and correctness.
390-390
: Extending trigger
function with TriggerOptions
and TriggerApiRequestOptions
Enhances the trigger
function by allowing additional options, improving flexibility in task execution.
424-427
: Refactoring triggerAndWait
to return TaskRunPromise
Provides a consistent and type-safe way to handle task execution results, improving developer experience.
473-478
: Implementing batchTriggerAndWait
function for batch processing
Adds the ability to trigger multiple tasks and wait for their completion, which is essential for batch operations.
499-499
: Enhancing triggerAndPoll
with combined options
Merges TriggerOptions
and PollOptions
to allow for flexible configuration when triggering and polling tasks.
510-510
: Adding batchTrigger
function with batch options
Introduces batch triggering capabilities with optional configurations, facilitating batch operations.
609-681
: Adding batchTriggerByIdAndWait
for synchronous batch processing
This function enhances batch capabilities by allowing the caller to wait for the completion of all triggered tasks.
683-769
: Implementing batchTriggerTasks
and batchTriggerAndWaitTasks
for diverse task batches
These functions extend batch processing to handle arrays of different tasks, increasing flexibility in batch operations.
Line range hint 852-894
: Enhancing trigger_internal
with additional options
Including parsePayload
and request options improves payload handling and request customization for task triggering.
Line range hint 905-982
: Refining batchTrigger_internal
for batch processing
The batchTrigger_internal
function is central to batch operations, and the enhancements support custom queues and payload parsing.
Line range hint 989-1042
: Improving triggerAndWait_internal
with structured tracing and context handling
The use of OpenTelemetry spans and context attributes enhances observability and debugging capabilities.
Line range hint 1061-1122
: Updating batchTriggerAndWait_internal
to handle batch waits with context
Extending this function improves batch processing by waiting for all tasks to complete and properly handling execution context.
1146-1180
: Optimizing batch task run execution result handling
The handleBatchTaskRunExecutionResult
function processes batch results efficiently, including handling object store outputs when necessary.
Line range hint 1182-1217
: Adding handleBatchTaskRunExecutionResultV2
for versatile result handling
The new function accommodates varied task identifiers and improves robustness in processing batch execution results.
1217-1235
: Ensuring correct handling of task run execution results
The handleTaskRunExecutionResult
function correctly processes both successful and failed task executions, including error creation and output parsing.
apps/webapp/app/v3/marqs/sharedQueueConsumer.server.ts (2)
410-412
: Addition of version information to task runs
Including taskVersion
, sdkVersion
, and cliVersion
in the taskRun
update enhances tracking and debugging by recording the versions involved in each task run.
1041-1041
: Include taskIdentifier
in execution results
Adding taskIdentifier
to both successful and failed execution results improves traceability and consistency across the system.
Also applies to: 1049-1049
//is it an int?
const int = parseInt(value);
if (!isNaN(int)) {
Use Number.isNaN
instead of global isNaN
Using the global isNaN
function can lead to unexpected results due to type coercion. It's safer to use Number.isNaN
, which doesn't coerce its argument.
Apply this diff to fix the issue:
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
🧰 Tools
🪛 Biome (1.9.4)
[error] 916-916: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
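For reference, a small example of the coercion difference; this is plain JavaScript/TypeScript semantics, independent of this codebase:
// Global isNaN coerces its argument, so non-numeric strings report true.
console.log(isNaN("not a number" as unknown as number));        // true — the string coerces to NaN
console.log(Number.isNaN("not a number" as unknown as number)); // false — not the NaN value itself
console.log(Number.isNaN(parseInt("abc", 10)));                 // true — parseInt actually returned NaN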
>
  <div className="flex flex-col gap-4 p-3">
    <div className="flex flex-col gap-1">
      <Label>Batch ID</Label>
Correct the Label to 'Schedule ID' in ScheduleIdDropdown
The Label
component in the ScheduleIdDropdown
function incorrectly displays 'Batch ID'. It should be updated to 'Schedule ID' to reflect the proper context.
Apply this diff to fix the label:
- <Label>Batch ID</Label>
+ <Label>Schedule ID</Label>
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- <Label>Batch ID</Label>
+ <Label>Schedule ID</Label>
if (!scheduleId.startsWith("sched")) {
  error = "Schedule IDs start with 'sched_'";
} else if (scheduleId.length !== 27) {
Ensure consistent schedule ID prefix check and error message
The scheduleId
validation checks if it starts with 'sched'
, but the error message states that schedule IDs start with 'sched_'
. To avoid confusion, update the check to match the error message.
Apply this diff to correct the prefix check:
- if (!scheduleId.startsWith("sched")) {
+ if (!scheduleId.startsWith("sched_")) {
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- if (!scheduleId.startsWith("sched")) {
+ if (!scheduleId.startsWith("sched_")) {
  error = "Schedule IDs start with 'sched_'";
} else if (scheduleId.length !== 27) {
//is it an int?
const int = parseInt(value);
if (!isNaN(int)) {
Use Number.isNaN
instead of isNaN
for safe type checking
Using the global isNaN
function can lead to unexpected results due to type coercion. It's safer to use Number.isNaN
, which doesn't perform type coercion.
Apply this diff to update the function:
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
🧰 Tools
🪛 Biome (1.9.4)
[error] 1516-1516: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (9)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (4)
98-98
: Use lowercase primitive type 'bigint'
For consistency with TypeScript conventions, use the lowercase primitive type.
- runCount: BigInt;
+ runCount: bigint;
🧰 Tools
🪛 Biome (1.9.4)
[error] 98-98: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
127-132
: Document the version check in status filter
The SQL query excludes v1 batches (
b.version <> 'v1'
) when filtering by status. This constraint should be documented to explain why v1 batches are excluded.
Consider adding a comment explaining the version check:
  ${
    statuses && statuses.length > 0
      ? Prisma.sql`AND b.status = ANY(ARRAY[${Prisma.join(
          statuses
-       )}]::"BatchTaskRunStatus"[]) AND b.version <> 'v1'`
+       )}]::"BatchTaskRunStatus"[]) AND b.version <> 'v1' /* Exclude v1 batches as they use a different status model */`
      : Prisma.empty
  }
153-169
: Add error handling for edge cases in pagination
The pagination logic should handle edge cases where
batches
array might be empty.
  switch (direction) {
    case "forward":
+     if (batches.length === 0) {
+       break;
+     }
      previous = cursor ? batches.at(0)?.id : undefined;
      if (hasMore) {
        next = batches[pageSize - 1]?.id;
      }
      break;
    case "backward":
+     if (batches.length === 0) {
+       break;
+     }
      batches.reverse();
      if (hasMore) {
        previous = batches[1]?.id;
        next = batches[pageSize]?.id;
      } else {
        next = batches[pageSize - 1]?.id;
      }
      break;
  }
197-197
: Add bounds check for BigInt to Number conversion
The conversion from
bigint
to number
could lose precision or throw for very large values.
- runCount: Number(batch.runCount),
+ runCount: batch.runCount <= BigInt(Number.MAX_SAFE_INTEGER)
+   ? Number(batch.runCount)
+   : Number.MAX_SAFE_INTEGER,
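A quick illustration of the precision issue, using standard JavaScript BigInt/Number semantics:
// Above 2^53 - 1 (Number.MAX_SAFE_INTEGER), distinct bigints can map to the same number.
const a = 2n ** 53n;                 // 9007199254740992n
const b = a + 1n;                    // 9007199254740993n
console.log(Number(a) === Number(b)); // true — the low bit is lost in the conversion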
320-320
: Consider updating type definitions
While the batch property implementation is correct, consider updating the corresponding type definitions (e.g.,
SpanRun
type) to include this new property for better type safety.Example type update:
export type SpanRun = NonNullable<NonNullable<Result>["run"]> & { batch?: { friendlyId: string; }; };apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (3)
132-134
: Enhance pagination controls accessibility
The pagination controls should have proper ARIA labels for better screen reader support.
<div className="flex items-center justify-end gap-x-2"> + <div role="navigation" aria-label="Batch list pagination"> <ListPagination list={{ pagination }} /> + </div> </div>
216-234
: Extract batch version check into a separate component
The batch version check logic could be extracted into a reusable component for better maintainability.
+ function BatchStatusIndicator({ batch }) { + if (batch.batchVersion === "v1") { + return ( + <SimpleTooltip + content="Upgrade to the latest SDK for batch statuses to appear." + disableHoverableContent + button={ + <span className="flex items-center gap-1"> + <ExclamationCircleIcon className="size-4 text-slate-500" /> + <span>Legacy batch</span> + </span> + } + /> + ); + } + + return ( + <SimpleTooltip + content={descriptionForBatchStatus(batch.status)} + disableHoverableContent + button={<BatchStatusCombo status={batch.status} />} + /> + ); + } <TableCell to={path}> - {batch.batchVersion === "v1" ? ( - <SimpleTooltip - content="Upgrade to the latest SDK for batch statuses to appear." - disableHoverableContent - button={ - <span className="flex items-center gap-1"> - <ExclamationCircleIcon className="size-4 text-slate-500" /> - <span>Legacy batch</span> - </span> - } - /> - ) : ( - <SimpleTooltip - content={descriptionForBatchStatus(batch.status)} - disableHoverableContent - button={<BatchStatusCombo status={batch.status} />} - /> - )} + <BatchStatusIndicator batch={batch} /> </TableCell>
157-187
: Add aria-sort attributes to sortable columns
The table header cells should indicate their sortable state for better accessibility.
<TableHeader> <TableRow> - <TableHeaderCell>ID</TableHeaderCell> + <TableHeaderCell aria-sort="none">ID</TableHeaderCell> - <TableHeaderCell>Env</TableHeaderCell> + <TableHeaderCell aria-sort="none">Env</TableHeaderCell> <TableHeaderCell tooltip={/*...*/} + aria-sort="none" > Status </TableHeaderCell> - <TableHeaderCell>Runs</TableHeaderCell> + <TableHeaderCell aria-sort="none">Runs</TableHeaderCell> - <TableHeaderCell>Duration</TableHeaderCell> + <TableHeaderCell aria-sort="none">Duration</TableHeaderCell> - <TableHeaderCell>Created</TableHeaderCell> + <TableHeaderCell aria-sort="none">Created</TableHeaderCell> - <TableHeaderCell>Finished</TableHeaderCell> + <TableHeaderCell aria-sort="none">Finished</TableHeaderCell> </TableRow> </TableHeader>apps/webapp/app/utils/pathBuilder.ts (1)
444-458
: Create a named type for batch parameters.
The batch parameter is using an inline object type
{ friendlyId: string }
. For consistency with other path parameter types in this file (likeOrgForPath
,ProjectForPath
), consider creating a named type.Add this type declaration at the top of the file with other path parameter types:
export type BatchForPath = Pick<Batch, "friendlyId">;Then update the function signatures:
export function v3BatchPath( organization: OrgForPath, project: ProjectForPath, - batch: { friendlyId: string } + batch: BatchForPath ) {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
apps/webapp/app/components/runs/v3/BatchStatus.tsx
(1 hunks)apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
(1 hunks)apps/webapp/app/presenters/v3/SpanPresenter.server.ts
(2 hunks)apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx
(1 hunks)apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.v3.$projectParam.runs.$runParam.spans.$spanParam/route.tsx
(2 hunks)apps/webapp/app/utils/pathBuilder.ts
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/webapp/app/components/runs/v3/BatchStatus.tsx
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
[error] 98-98: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
🔇 Additional comments (11)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (3)
30-43
: LGTM! Well-structured class with clear parameter defaults
The class structure and method signature are well-designed with appropriate default values.
87-87
:
Handle potential undefined value from 'parse' function
The parse
function may return null
or undefined
if the period
string is invalid.
const periodMs = period ? parse(period) : undefined;
+if (period && periodMs === null) {
+ throw new Error(`Invalid period value: ${period}`);
+}
55-80
:
Restrict project access to the user's organization
The project query should be scoped to the user's organization to prevent unauthorized access.
const project = await this._replica.project.findFirstOrThrow({
select: {
// existing selections
},
where: {
id: projectId,
+ organization: {
+ members: {
+ some: {
+ userId: userId,
+ },
+ },
+ },
},
});
apps/webapp/app/presenters/v3/SpanPresenter.server.ts (2)
152-156
: LGTM! Clean addition of batch selection
The batch selection is well-structured and follows the minimal data selection pattern used throughout the presenter.
152-156
: Verify batch filtering integration
Let's verify that the batch friendlyId exposed here is properly integrated with the batch filtering functionality mentioned in the PR objectives.
Also applies to: 320-320
✅ Verification successful
Based on the shell script results, I can now provide a final response as we have sufficient evidence of the batch filtering integration:
Batch filtering integration is properly implemented
The batch.friendlyId exposed in SpanPresenter is correctly integrated with the batch filtering functionality:
- The API layer supports batch filtering through
filter[batch]
parameter in ApiRunListPresenter - The UI implements comprehensive batch filtering components (BatchFilters.tsx)
- The batch.friendlyId is properly used across the application:
- In API routes for filtering (
apps/webapp/app/presenters/v3/ApiRunListPresenter.server.ts
) - In UI components for display and navigation (
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx
) - In path building utilities (
apps/webapp/app/utils/pathBuilder.ts
)
- In API routes for filtering (
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for batch filtering implementation in related files
# Look for batch filter implementations in API routes or controllers
rg -A 5 "filter\[batch\]" --type ts
# Look for batch friendlyId usage in UI components
rg -A 5 "batch\.friendlyId" --type tsx
Length of output: 1619
Script:
#!/bin/bash
# Let's check for batch.friendlyId usage with proper file type and also look for batch filtering UI components
rg -A 5 "batch\.friendlyId"
# Look for batch filtering related components and hooks
rg -A 5 "useBatchFilter|BatchFilter"
# Check for any batch-related route parameters or filtering logic
rg -A 5 "batch.*filter|filter.*batch" --type ts
Length of output: 26013
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (2)
94-94
: Use absolute redirect path for error handling
The redirect path "/" may not correctly redirect the user to the intended location, especially if the application is deployed under a subdirectory.
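For illustration, one option is to redirect to the organization page built from the slug that is already available in the loader, rather than the bare root. This is only a sketch: the /orgs/:organizationSlug shape is inferred from the route file names in this PR, and the helper name is invented.
function projectNotFoundRedirectPath(organizationSlug: string): string {
  // Send the user to a meaningful page instead of "/", which may not resolve as intended
  // when the app is served under a sub-path.
  return `/orgs/${organizationSlug}`;
}
// e.g. return redirectWithErrorMessage(projectNotFoundRedirectPath(organizationSlug), request, "Project not found");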
110-111
:
Add loading state handling for useProject hook
The project data might not be immediately available. Add loading state handling:
const { batches, hasFilters, filters, pagination } = useTypedLoaderData<typeof loader>();
const project = useProject();
+
+ if (!project) {
+ return (
+ <PageContainer>
+ <div className="flex h-full items-center justify-center">
+ <Spinner />
+ </div>
+ </PageContainer>
+ );
+ }
Likely invalid or redundant comment.
apps/webapp/app/utils/pathBuilder.ts (2)
440-458
: LGTM! Consistent with existing patterns.
The new batch-related functions follow the established patterns in the codebase:
- Consistent use of v3 prefix
- Similar parameter types
- Similar function structure
440-458
: LGTM! Aligns with PR objectives.
The new path builder functions directly support the PR objectives (a usage sketch follows):
- v3BatchesPath enables navigation to the new Batches page
- v3BatchRunsPath supports filtering runs by batch
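For context, a rough sketch of how these helpers might be consumed from a route component. The slugs and friendlyId are placeholders, it assumes OrgForPath/ProjectForPath only need the slug here, and the example hrefs mirror the query-string form introduced in this PR:
import { v3BatchesPath, v3BatchRunsPath } from "~/utils/pathBuilder";

const organization = { slug: "acme" };
const project = { slug: "my-project" };
const batch = { friendlyId: "batch_abc123" };

// e.g. "/orgs/acme/projects/v3/my-project/batches"
const batchesHref = v3BatchesPath(organization, project);

// e.g. "/orgs/acme/projects/v3/my-project/runs?batchId=batch_abc123"
const runsForBatchHref = v3BatchRunsPath(organization, project, batch);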
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.v3.$projectParam.runs.$runParam.spans.$spanParam/route.tsx (2)
59-60
: LGTM: New batch-related path utilities
The new path utilities are correctly imported and follow the established naming conventions.
588-602
: LGTM: Well-implemented batch information display
The batch information UI element is well-implemented with:
- Proper conditional rendering
- Consistent use of existing UI components
- Good user experience with tooltips
- Clear navigation to batch details
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx
export const loader = async ({ request, params }: LoaderFunctionArgs) => {
  const userId = await requireUserId(request);
  const { projectParam, organizationSlug } = ProjectParamSchema.parse(params);

  const url = new URL(request.url);
  const s = {
    cursor: url.searchParams.get("cursor") ?? undefined,
    direction: url.searchParams.get("direction") ?? undefined,
    environments: url.searchParams.getAll("environments"),
    statuses: url.searchParams.getAll("statuses"),
    period: url.searchParams.get("period") ?? undefined,
    from: url.searchParams.get("from") ?? undefined,
    to: url.searchParams.get("to") ?? undefined,
    id: url.searchParams.get("batchId") ?? undefined,
  };
  const filters = BatchListFilters.parse(s);

  const project = await findProjectBySlug(organizationSlug, projectParam, userId);

  if (!project) {
    return redirectWithErrorMessage("/", request, "Project not found");
  }

  const presenter = new BatchListPresenter();
  const list = await presenter.call({
    userId,
    projectId: project.id,
    ...filters,
    friendlyId: filters.id,
  });

  return typedjson(list);
};
Add error handling for data fetching and validation
The loader function needs additional error handling in several areas:
- The filters parsing at line 89 could throw validation errors
- The presenter.call operation lacks error handling
Add comprehensive error handling:
- const filters = BatchListFilters.parse(s);
+ try {
+ const filters = BatchListFilters.parse(s);
+ } catch (error) {
+ return redirectWithErrorMessage(
+ request.url,
+ request,
+ "Invalid filter parameters"
+ );
+ }
const project = await findProjectBySlug(organizationSlug, projectParam, userId);
if (!project) {
return redirectWithErrorMessage("/", request, "Project not found");
}
const presenter = new BatchListPresenter();
- const list = await presenter.call({
- userId,
- projectId: project.id,
- ...filters,
- friendlyId: filters.id,
- });
+ try {
+ const list = await presenter.call({
+ userId,
+ projectId: project.id,
+ ...filters,
+ friendlyId: filters.id,
+ });
+ return typedjson(list);
+ } catch (error) {
+ console.error("Failed to fetch batches:", error);
+ return redirectWithErrorMessage(
+ request.url,
+ request,
+ "Failed to load batches"
+ );
+ }
- return typedjson(list);
Committable suggestion skipped: line range outside the PR's diff.
…nd also added a threshold, below which the processing of items is async
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (14)
apps/webapp/app/routes/api.v1.tasks.batch.ts (4)
26-28
: Optimize resource authorization function. The current implementation creates a new Set and then converts it back to an array. This can be simplified using Array.from with a Set constructor directly.
- resource: (_, __, ___, body) => ({ - tasks: Array.from(new Set(body.items.map((i) => i.task))), - }), + resource: (_, __, ___, body) => ({ + tasks: Array.from(new Set(body.items.map((i) => i.task))), + }),
76-78
: Extract magic number to a named constant. The 30-day default expiration period should be extracted to a named constant for better maintainability and clarity.
+ const DEFAULT_IDEMPOTENCY_KEY_TTL_DAYS = 30; const idempotencyKeyExpiresAt = resolveIdempotencyKeyTTL(idempotencyKeyTTL) ?? - new Date(Date.now() + 24 * 60 * 60 * 1000 * 30); + new Date(Date.now() + 24 * 60 * 60 * 1000 * DEFAULT_IDEMPOTENCY_KEY_TTL_DAYS);
106-115
: Consider using a custom error type for batch-specific errors. Create a dedicated error type for batch-specific errors to improve error handling clarity and maintainability.
class BatchTriggerError extends Error { constructor( message: string, public readonly code: string, public readonly statusCode: number = 422 ) { super(message); this.name = 'BatchTriggerError'; } }Then update the error handling:
- if (error instanceof ServiceValidationError) { + if (error instanceof BatchTriggerError) { + return json({ error: error.message, code: error.code }, { status: error.statusCode }); + } else if (error instanceof ServiceValidationError) { return json({ error: error.message }, { status: 422 });
139-143
: Make JWT configuration more flexible. The JWT expiration time should be configurable through environment variables or configuration settings.
+ const JWT_EXPIRATION_TIME = env.JWT_EXPIRATION_TIME || "1h"; const jwt = await generateJWT({ secretKey: environment.apiKey, payload: claims, - expirationTime: "1h", + expirationTime: JWT_EXPIRATION_TIME, });apps/webapp/app/components/primitives/DateField.tsx (4)
1-1
: Remove unused import XMarkIcon
The
XMarkIcon
is imported but not used anywhere in the component. Remove it to keep the imports clean.-import { BellAlertIcon, XMarkIcon } from "@heroicons/react/20/solid"; +import { BellAlertIcon } from "@heroicons/react/20/solid";
42-43
: Consider memoizing the deviceTimezone. Since the timezone won't change during runtime, consider memoizing this value using
useMemo
or moving it outside the component.-const deviceTimezone = Intl.DateTimeFormat().resolvedOptions().timeZone; +const deviceTimezone = /* @once */ Intl.DateTimeFormat().resolvedOptions().timeZone;
Line range hint
84-92
: Add missing dependencies to useEffect. The useEffect hook is missing
state.value
andutc
in its dependency array. This could lead to stale values being used.useEffect(() => { if (state.value === undefined && defaultValue === undefined) return; const calendarDate = utc ? utcDateToCalendarDate(defaultValue) : dateToCalendarDate(defaultValue); //unchanged if (state.value?.toDate("utc").getTime() === defaultValue?.getTime()) { return; } setValue(calendarDate); - }, [defaultValue]); + }, [defaultValue, state.value, utc]);
Line range hint
183-206
: Add input validation and documentation to utility functions. The date conversion utility functions would benefit from:
- JSDoc documentation explaining their purpose and parameters
- Input validation to handle invalid dates
- Consider combining into a single function with a timezone parameter
+/** + * Converts a UTC Date object to CalendarDateTime + * @param {Date} [date] - The date to convert + * @returns {CalendarDateTime | undefined} The converted date or undefined + */ function utcDateToCalendarDate(date?: Date) { + if (date && isNaN(date.getTime())) { + throw new Error('Invalid date provided'); + } return date ? new CalendarDateTime( date.getUTCFullYear(), date.getUTCMonth() + 1, date.getUTCDate(), date.getUTCHours(), date.getUTCMinutes(), date.getUTCSeconds() ) : undefined; } +/** + * Converts a local Date object to CalendarDateTime + * @param {Date} [date] - The date to convert + * @returns {CalendarDateTime | undefined} The converted date or undefined + */ function dateToCalendarDate(date?: Date) { + if (date && isNaN(date.getTime())) { + throw new Error('Invalid date provided'); + } return date ? new CalendarDateTime( date.getFullYear(), date.getMonth() + 1, date.getDate(), date.getHours(), date.getMinutes(), date.getSeconds() ) : undefined; }apps/webapp/app/v3/services/triggerTask.server.ts (3)
31-40
: Consider adding JSDoc comments for the new options. The new options enhance the service's functionality. Consider adding JSDoc comments to document their purpose and usage.
export type TriggerTaskServiceOptions = { idempotencyKey?: string; + /** Expiration date for the idempotency key. Defaults to 30 days if not specified. */ idempotencyKeyExpiresAt?: Date; triggerVersion?: string; traceContext?: Record<string, string | undefined>; spanParentAsLink?: boolean; parentAsLinkType?: "replay" | "trigger"; batchId?: string; customIcon?: string; + /** Custom run ID. If not provided, a new one will be generated. */ runId?: string; + /** If true, bypasses entitlement and queue size checks. Use with caution. */ skipChecks?: boolean; };
Line range hint
59-104
: Extract magic number to a named constant. The idempotency key handling logic is well-implemented. Consider extracting the 30-day expiration period to a named constant for better maintainability.
+const DEFAULT_IDEMPOTENCY_KEY_TTL_DAYS = 30; +const MILLISECONDS_PER_DAY = 24 * 60 * 60 * 1000; const idempotencyKeyExpiresAt = options.idempotencyKeyExpiresAt ?? resolveIdempotencyKeyTTL(body.options?.idempotencyKeyTTL) ?? - new Date(Date.now() + 24 * 60 * 60 * 1000 * 30); // 30 days + new Date(Date.now() + MILLISECONDS_PER_DAY * DEFAULT_IDEMPOTENCY_KEY_TTL_DAYS);
144-144
: Consider adding an index for version fields. The implementation looks good. Consider adding a composite index for the version tracking fields to optimize queries.
Consider adding the following index to your database schema:
CREATE INDEX idx_task_run_versions ON task_run(task_version, sdk_version, cli_version);Also applies to: 361-374
apps/webapp/app/components/runs/v3/BatchFilters.tsx (2)
822-828
: Consider using regex for batch ID validation. While the current validation is good, using a regex pattern would make it more maintainable and potentially catch additional edge cases.
Apply this diff to improve the validation:
- if (!batchId.startsWith("batch_")) { - error = "Batch IDs start with 'batch_'"; - } else if (batchId.length !== 27) { - error = "Batch IDs are 27 characters long"; - } + const batchIdPattern = /^batch_[a-zA-Z0-9]{21}$/; + if (!batchIdPattern.test(batchId)) { + error = "Invalid batch ID format. Expected format: batch_ followed by 21 characters"; + }
767-780
: Remove unnecessary React Fragment. The Fragment is redundant as it contains only one child element.
Apply this diff to simplify the code:
- value={ - <> - {rangeType === "range" ? ( - <span> - <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} - <DateTime date={toDate!} includeTime includeSeconds /> - </span> - ) : rangeType === "from" ? ( - <DateTime date={fromDate!} includeTime includeSeconds /> - ) : ( - <DateTime date={toDate!} includeTime includeSeconds /> - )} - </> - } + value={ + rangeType === "range" ? ( + <span> + <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} + <DateTime date={toDate!} includeTime includeSeconds /> + </span> + ) : rangeType === "from" ? ( + <DateTime date={fromDate!} includeTime includeSeconds /> + ) : ( + <DateTime date={toDate!} includeTime includeSeconds /> + ) + }🧰 Tools
🪛 Biome (1.9.4)
[error] 768-780: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment(lint/complexity/noUselessFragments)
apps/webapp/app/components/runs/v3/RunFilters.tsx (1)
169-173
: Consider grouping related filter types.The filter types could be organized better by grouping related items:
- ID-based filters (Run ID, Batch ID, Schedule ID)
- Time-based filters (Created, Custom date range)
- Other filters (Status, Environment, Tasks, Tags, Bulk action)
Consider organizing the filter types array with visual separators or grouping:
const filterTypes = [ // Status & Environment { name: "statuses", title: "Status", icon: ( <div className="flex size-4 items-center justify-center"> <div className="size-3 rounded-full border-2 border-text-dimmed" /> </div> ), }, { name: "environments", title: "Environment", icon: <CpuChipIcon className="size-4" /> }, // Tasks & Tags { name: "tasks", title: "Tasks", icon: <TaskIcon className="size-4" /> }, { name: "tags", title: "Tags", icon: <TagIcon className="size-4" /> }, // Time-based filters { name: "created", title: "Created", icon: <CalendarIcon className="size-4" /> }, { name: "daterange", title: "Custom date range", icon: <CalendarIcon className="size-4" /> }, // ID-based filters { name: "run", title: "Run ID", icon: <FingerPrintIcon className="size-4" /> }, { name: "batch", title: "Batch ID", icon: <Squares2X2Icon className="size-4" /> }, { name: "schedule", title: "Schedule ID", icon: <ClockIcon className="size-4" /> }, // Other { name: "bulk", title: "Bulk action", icon: <ListChecks className="size-4" /> }, ] as const;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (11)
apps/webapp/app/components/primitives/DateField.tsx (8 hunks)
apps/webapp/app/components/runs/TimeFrameFilter.tsx (2 hunks)
apps/webapp/app/components/runs/v3/BatchFilters.tsx (1 hunks)
apps/webapp/app/components/runs/v3/RunFilters.tsx (13 hunks)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.test.tasks.$taskParam/route.tsx (2 hunks)
apps/webapp/app/routes/_app.orgs.$organizationSlug.v3.usage/route.tsx (1 hunks)
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (4 hunks)
apps/webapp/app/routes/api.v1.tasks.batch.ts (1 hunks)
apps/webapp/app/v3/services/triggerTask.server.ts (8 hunks)
internal-packages/database/prisma/migrations/20241120161204_modify_batch_task_run_for_improvements/migration.sql (1 hunks)
internal-packages/database/prisma/migrations/20241120161205_add_new_unique_index_on_batch_task_run/migration.sql (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- internal-packages/database/prisma/migrations/20241120161205_add_new_unique_index_on_batch_task_run/migration.sql
🚧 Files skipped from review as they are similar to previous changes (5)
- apps/webapp/app/components/runs/TimeFrameFilter.tsx
- apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.test.tasks.$taskParam/route.tsx
- apps/webapp/app/routes/_app.orgs.$organizationSlug.v3.usage/route.tsx
- apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts
- internal-packages/database/prisma/migrations/20241120161204_modify_batch_task_run_for_improvements/migration.sql
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/components/runs/v3/BatchFilters.tsx
[error] 768-780: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 916-916: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
apps/webapp/app/components/runs/v3/RunFilters.tsx
[error] 1106-1118: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 1516-1516: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
🔇 Additional comments (11)
apps/webapp/app/components/primitives/DateField.tsx (1)
Line range hint 44-176
: Implementation looks solid with good timezone handling
The DateField component's implementation of UTC support is well-structured and aligns with the PR objectives. The component correctly handles both UTC and local dates, with proper timezone conversions throughout the lifecycle methods.
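For reference, a condensed sketch of the UTC/local split described above, mirroring the utcDateToCalendarDate/dateToCalendarDate utilities; the @internationalized/date import is an assumption about where CalendarDateTime comes from:
import { CalendarDateTime } from "@internationalized/date";

// Read the same Date with UTC accessors when `utc` is set, local accessors otherwise.
function toCalendarDateTime(date: Date, utc: boolean): CalendarDateTime {
  return utc
    ? new CalendarDateTime(
        date.getUTCFullYear(),
        date.getUTCMonth() + 1,
        date.getUTCDate(),
        date.getUTCHours(),
        date.getUTCMinutes(),
        date.getUTCSeconds()
      )
    : new CalendarDateTime(
        date.getFullYear(),
        date.getMonth() + 1,
        date.getDate(),
        date.getHours(),
        date.getMinutes(),
        date.getSeconds()
      );
}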
apps/webapp/app/v3/services/triggerTask.server.ts (3)
15-15
: LGTM! Import name change improves clarity
The renamed import better reflects the function's specific purpose of uploading packets.
650-650
: LGTM! Function call updated consistently
The function call has been updated to match the renamed import.
107-131
: Consider restricting skipChecks usage
While the implementation is solid, the skipChecks option could be dangerous if misused. Consider:
- Adding a warning log when checks are skipped
- Restricting this option to specific environments or user roles
apps/webapp/app/components/runs/v3/BatchFilters.tsx (3)
76-91
: Ensure consistent parameter naming in schema
The schema uses id
while the component uses batchId
in URL parameters. This inconsistency can lead to filters not functioning correctly.
Apply this diff to fix the inconsistency:
- id: z.string().optional(),
+ batchId: z.string().optional(),
1-933
: Well-structured implementation of batch filtering
The implementation is clean, modular, and aligns well with the PR objectives. The code demonstrates good practices in React component design, state management, and error handling.
🧰 Tools
🪛 Biome (1.9.4)
[error] 768-780: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 916-916: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
911-921
:
Use Number.isNaN instead of global isNaN
The global isNaN
function performs type coercion which can lead to unexpected results. Use Number.isNaN
for safer number validation.
Apply this diff to fix the issue:
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
Likely invalid or redundant comment.
🧰 Tools
🪛 Biome (1.9.4)
[error] 916-916: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
apps/webapp/app/components/runs/v3/RunFilters.tsx (4)
96-99
: LGTM: Schema changes align with PR objectives.
The new properties added to TaskRunListSearchFilters
schema appropriately support the enhanced filtering capabilities.
1447-1447
: Correct the Label to 'Schedule ID' in ScheduleIdDropdown
The Label
component in the ScheduleIdDropdown
function incorrectly displays 'Batch ID'. It should be updated to 'Schedule ID' to reflect the proper context.
Apply this diff to fix the label:
- <Label>Batch ID</Label>
+ <Label>Schedule ID</Label>
1423-1425
: Ensure consistent schedule ID prefix check and error message
The scheduleId
validation checks if it starts with 'sched'
, but the error message states that schedule IDs start with 'sched_'
. To avoid confusion, update the check to match the error message.
Apply this diff to correct the prefix check:
- if (!scheduleId.startsWith("sched")) {
+ if (!scheduleId.startsWith("sched_")) {
1511-1521
: 🛠️ Refactor suggestion
Use Number.isNaN
instead of isNaN
for safe type checking
Using the global isNaN
function can lead to unexpected results due to type coercion. It's safer to use Number.isNaN
, which doesn't perform type coercion.
Apply this diff to update the function:
function dateFromString(value: string | undefined | null): Date | undefined {
if (!value) return;
//is it an int?
const int = parseInt(value);
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
return new Date(int);
}
return new Date(value);
}
Likely invalid or redundant comment.
🧰 Tools
🪛 Biome (1.9.4)
[error] 1516-1516: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
if (!body.items.length) {
  return json({ error: "Batch cannot be triggered with no items" }, { status: 400 });
}
🛠️ Refactor suggestion
Enhance request validation
Consider adding validation for individual items in the batch to ensure they contain required fields and valid data before processing.
if (!body.items.length) {
return json({ error: "Batch cannot be triggered with no items" }, { status: 400 });
}
+
+ // Validate individual items
+ const invalidItems = body.items.filter(item => !item.task || typeof item.task !== 'string');
+ if (invalidItems.length > 0) {
+ return json({ error: "Invalid items found in batch" }, { status: 400 });
+ }
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
apps/webapp/app/components/runs/v3/RunFilters.tsx (1)
1108-1120
: Remove unnecessary Fragment wrapper. The Fragment is redundant as it contains only one child.
Apply this diff to simplify the code:
- value={ - <> - {rangeType === "range" ? ( - <span> - <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} - <DateTime date={toDate!} includeTime includeSeconds /> - </span> - ) : rangeType === "from" ? ( - <DateTime date={fromDate!} includeTime includeSeconds /> - ) : ( - <DateTime date={toDate!} includeTime includeSeconds /> - )} - </> - } + value={ + rangeType === "range" ? ( + <span> + <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} + <DateTime date={toDate!} includeTime includeSeconds /> + </span> + ) : rangeType === "from" ? ( + <DateTime date={fromDate!} includeTime includeSeconds /> + ) : ( + <DateTime date={toDate!} includeTime includeSeconds /> + ) + }🧰 Tools
🪛 Biome (1.9.4)
[error] 1108-1120: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment(lint/complexity/noUselessFragments)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
apps/webapp/app/components/runs/v3/RunFilters.tsx
(14 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/components/runs/v3/RunFilters.tsx
[error] 1108-1120: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 1518-1518: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
🔇 Additional comments (5)
apps/webapp/app/components/runs/v3/RunFilters.tsx (5)
96-99
: LGTM: Schema changes align with PR objectives.
The new properties added to TaskRunListSearchFilters
schema appropriately support the enhanced batch triggering functionality.
1133-1157
: LGTM: Well-implemented toggle with proper state management.
The ShowChildTasksToggle
component correctly:
- Handles state management using URL parameters
- Automatically disables when incompatible filters are active
- Provides clear user feedback
1449-1449
: Correct the Label to 'Schedule ID' in ScheduleIdDropdown
The Label
component incorrectly displays 'Batch ID' instead of 'Schedule ID'.
1425-1427
: Ensure consistent schedule ID prefix check and error message
The validation check uses 'sched'
but the error message states 'sched_'
.
1513-1523
: 🛠️ Refactor suggestion
Use Number.isNaN
for safer type checking
The isNaN
function performs type coercion which can lead to unexpected results.
Apply this diff to use the safer Number.isNaN
:
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
Likely invalid or redundant comment.
🧰 Tools
🪛 Biome (1.9.4)
[error] 1518-1518: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
apps/webapp/app/components/runs/v3/BatchFilters.tsx (1)
738-750
: Remove unnecessary React Fragment. The Fragment (
<>
and</>
) wrapping the value is unnecessary since it contains only a single child expression. Removing it can simplify the code.Apply this diff to remove the unnecessary Fragment:
- value={ - <> - {rangeType === "range" ? ( - <span> - <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} - <DateTime date={toDate!} includeTime includeSeconds /> - </span> - ) : rangeType === "from" ? ( - <DateTime date={fromDate!} includeTime includeSeconds /> - ) : ( - <DateTime date={toDate!} includeTime includeSeconds /> - )} - </> - } + value={ + rangeType === "range" ? ( + <span> + <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} + <DateTime date={toDate!} includeTime includeSeconds /> + </span> + ) : rangeType === "from" ? ( + <DateTime date={fromDate!} includeTime includeSeconds /> + ) : ( + <DateTime date={toDate!} includeTime includeSeconds /> + ) + }🧰 Tools
🪛 Biome (1.9.4)
[error] 738-750: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment(lint/complexity/noUselessFragments)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
apps/webapp/app/components/runs/v3/BatchFilters.tsx
(1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/components/runs/v3/BatchFilters.tsx
[error] 738-750: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 886-886: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
🔇 Additional comments (2)
apps/webapp/app/components/runs/v3/BatchFilters.tsx (2)
58-58
: Ensure consistent use of 'id' vs 'batchId' in filters
The BatchListFilters
schema uses id
as the parameter name, while elsewhere in the code (e.g., in BatchIdDropdown
and search parameters), batchId
is used. This inconsistency can lead to filters not functioning correctly.
Apply these diffs to resolve the issue:
In BatchListFilters
schema at line 58:
- id: z.string().optional(),
+ batchId: z.string().optional(),
In BatchFilters
component at line 80:
- searchParams.has("id") ||
+ searchParams.has("batchId") ||
Also applies to: 80-80
886-886
: Use Number.isNaN
instead of global isNaN
Using the global isNaN
function can lead to unexpected results due to type coercion. Number.isNaN
is safer as it doesn't coerce its argument.
Apply this diff to fix the issue:
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
🧰 Tools
🪛 Biome (1.9.4)
[error] 886-886: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (7)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (4)
46-55
: Enhance type safety for URL parameters parsing. Consider using a dedicated URL parameter parsing library or implementing stronger type checking for the URL parameters.
+ const searchParamsSchema = z.object({ + cursor: z.string().optional(), + direction: z.string().optional(), + environments: z.array(z.string()), + statuses: z.array(z.string()), + period: z.string().optional(), + from: z.string().optional(), + to: z.string().optional(), + id: z.string().optional(), + }); const s = { cursor: url.searchParams.get("cursor") ?? undefined, direction: url.searchParams.get("direction") ?? undefined, environments: url.searchParams.getAll("environments"), statuses: url.searchParams.getAll("statuses"), period: url.searchParams.get("period") ?? undefined, from: url.searchParams.get("from") ?? undefined, to: url.searchParams.get("to") ?? undefined, id: url.searchParams.get("id") ?? undefined, }; + const validatedParams = searchParamsSchema.parse(s);
183-194
: Improve legacy batch version handling. The current implementation of legacy batch version handling could be more maintainable.
Consider extracting the legacy batch version handling into a separate component:
function LegacyBatchStatus() { return ( <SimpleTooltip content="Upgrade to the latest SDK for batch statuses to appear." disableHoverableContent button={ <span className="flex items-center gap-1"> <ExclamationCircleIcon className="size-4 text-slate-500" /> <span>Legacy batch</span> </span> } /> ); }
171-221
: Optimize table rendering performance. The table mapping function could benefit from performance optimizations.
Consider these improvements:
- Memoize the path generation
- Use a memo hook for complex calculations
- Extract row rendering to a separate component
const BatchTableRow = memo(({ batch, organization, project }: BatchRowProps) => { const path = useMemo( () => v3BatchRunsPath(organization, project, batch), [organization.slug, project.slug, batch.id] ); return ( <TableRow> {/* ... row content ... */} </TableRow> ); });
204-217
: Ensure consistent date formatting. The component uses different approaches for date formatting between duration and timestamps.
Consider creating a unified date formatting utility:
const DateDisplay = { duration: (start: Date, end: Date) => formatDuration(start, end, { style: "short" }), timestamp: (date: Date) => <DateTime date={date} />, optional: (date: Date | null) => (date ? <DateTime date={date} /> : "–") };apps/webapp/app/components/runs/v3/BatchFilters.tsx (3)
731-743
: Remove unnecessary Fragment. The Fragment wrapper is redundant as it contains only conditional expressions.
- value={ - <> - {rangeType === "range" ? ( - <span> - <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} - <DateTime date={toDate!} includeTime includeSeconds /> - </span> - ) : rangeType === "from" ? ( - <DateTime date={fromDate!} includeTime includeSeconds /> - ) : ( - <DateTime date={toDate!} includeTime includeSeconds /> - )} - </> - } + value={ + rangeType === "range" ? ( + <span> + <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} + <DateTime date={toDate!} includeTime includeSeconds /> + </span> + ) : rangeType === "from" ? ( + <DateTime date={fromDate!} includeTime includeSeconds /> + ) : ( + <DateTime date={toDate!} includeTime includeSeconds /> + ) + }🧰 Tools
🪛 Biome (1.9.4)
[error] 731-743: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment(lint/complexity/noUselessFragments)
785-791
: Extract batch ID validation logic. Consider extracting the batch ID validation logic into a reusable utility function. This would make it easier to maintain and reuse the validation rules across the application.
+const validateBatchId = (batchId: string): string | undefined => { + if (!batchId.startsWith("batch_")) { + return "Batch IDs start with 'batch_'"; + } + if (batchId.length !== 27) { + return "Batch IDs are 27 characters long"; + } + return undefined; +}; - let error: string | undefined = undefined; - if (batchId) { - if (!batchId.startsWith("batch_")) { - error = "Batch IDs start with 'batch_'"; - } else if (batchId.length !== 27) { - error = "Batch IDs are 27 characters long"; - } - } + const error = batchId ? validateBatchId(batchId) : undefined;
446-487
: Add type definition for time periods. Consider adding a type definition for time periods to improve type safety and maintainability.
+type TimePeriod = { + label: string; + value: string; +}; -const timePeriods = [ +const timePeriods: TimePeriod[] = [
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
apps/webapp/app/components/runs/v3/BatchFilters.tsx (1 hunks)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (1 hunks)
apps/webapp/app/utils/pathBuilder.ts (1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/components/runs/v3/BatchFilters.tsx
[error] 731-743: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 879-879: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
🔇 Additional comments (6)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx (2)
41-73
: Add comprehensive error handling to the loader function
The loader function needs error handling for several critical operations:
- URL parameter parsing
- Filters parsing
- Presenter call
This issue was previously identified. Adding error handling will improve the robustness of the loader function.
75-114
: Add error boundary and improve loading state handling
The Page component could benefit from better error handling and loading states.
This issue was previously identified. Adding an error boundary and proper loading states will improve the user experience.
apps/webapp/app/utils/pathBuilder.ts (2)
440-442
: LGTM!
The function follows the established pattern for listing resources.
444-450
: 🛠️ Refactor suggestion
Consider using path parameters instead of query parameters for batch identification.
The implementation uses query parameters (?id=
and ?batchId=
) for batch identification, which:
- Deviates from RESTful practices
- Is inconsistent with other similar functions in this file
- Uses inconsistent parameter names (
id
vsbatchId
) for the same concept
Apply this diff to use path parameters consistently:
export function v3BatchPath(
organization: OrgForPath,
project: ProjectForPath,
batch: { friendlyId: string }
) {
- return `${v3ProjectPath(organization, project)}/batches?id=${batch.friendlyId}`;
+ return `${v3ProjectPath(organization, project)}/batches/${batch.friendlyId}`;
}
export function v3BatchRunsPath(
organization: OrgForPath,
project: ProjectForPath,
batch: { friendlyId: string }
) {
- return `${v3ProjectPath(organization, project)}/runs?batchId=${batch.friendlyId}`;
+ return `${v3ProjectPath(organization, project)}/batches/${batch.friendlyId}/runs`;
}
Also applies to: 452-458
apps/webapp/app/components/runs/v3/BatchFilters.tsx (2)
46-61
: Maintain consistent parameter naming
The schema uses id
for the batch identifier, while the UI components use batchId
. This inconsistency was previously flagged and should be addressed for better maintainability.
- id: z.string().optional(),
+ batchId: z.string().optional(),
874-884
: 🛠️ Refactor suggestion
Use Number.isNaN
instead of global isNaN
The global isNaN
function performs type coercion which can lead to unexpected results. Use Number.isNaN
for more predictable type checking.
- if (!isNaN(int)) {
+ if (!Number.isNaN(int)) {
Likely invalid or redundant comment.
🧰 Tools
🪛 Biome (1.9.4)
[error] 879-879: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (10)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (3)
90-101
: Use lowercase 'bigint' type instead of 'BigInt'. For consistency with TypeScript conventions, use the lowercase primitive type.
- runCount: BigInt; + runCount: bigint;🧰 Tools
🪛 Biome (1.9.4)
[error] 98-98: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead(lint/complexity/noBannedTypes)
127-131
: Consider making batch version filter configurable. The batch version filter
<> 'v1'
is hardcoded in the SQL query. Consider making this configurable or moving it to a constant to improve maintainability.+const EXCLUDED_BATCH_VERSION = 'v1'; ${ statuses && statuses.length > 0 ? Prisma.sql`AND b.status = ANY(ARRAY[${Prisma.join( statuses - )}]::"BatchTaskRunStatus"[]) AND b."batchVersion" <> 'v1'` + )}]::"BatchTaskRunStatus"[]) AND b."batchVersion" <> ${EXCLUDED_BATCH_VERSION}` : Prisma.empty }
182-184
: Improve error handling for missing environment. Consider providing more context in the error message to help with debugging.
- throw new Error(`Environment not found for Batch ${batch.id}`); + throw new Error( + `Environment ${batch.runtimeEnvironmentId} not found for Batch ${batch.id} in Project ${projectId}` + );apps/webapp/app/components/runs/v3/BatchFilters.tsx (4)
55-62
: Consider extracting array preprocessing logic. The preprocessing logic for 'environments' and 'statuses' is duplicated. Consider extracting it into a reusable utility function.
+ const preprocessStringArray = (value: unknown) => + typeof value === "string" ? [value] : value; environments: z.preprocess( - (value) => (typeof value === "string" ? [value] : value), + preprocessStringArray, z.string().array().optional() ), statuses: z.preprocess( - (value) => (typeof value === "string" ? [value] : value), + preprocessStringArray, BatchStatus.array().optional() ),
96-101
: Specify form method for clear all action. The Form component should specify a method for the clear all action.
- <Form className="h-6"> + <Form method="get" className="h-6">
230-233
: Consider memoizing statuses array. The statuses array is recreated on every render. Consider memoizing it with useMemo.
-const statuses = allBatchStatuses.map((status) => ({ +const statuses = useMemo(() => allBatchStatuses.map((status) => ({ title: batchStatusTitle(status), value: status, -})); +})), []);
358-365
: Memoize error computation. The error computation should be memoized to prevent unnecessary recalculations.
- let error: string | undefined = undefined; - if (batchId) { - if (!batchId.startsWith("batch_")) { - error = "Batch IDs start with 'batch_'"; - } else if (batchId.length !== 27) { - error = "Batch IDs are 27 characters long"; - } - } + const error = useMemo(() => { + if (!batchId) return undefined; + if (!batchId.startsWith("batch_")) return "Batch IDs start with 'batch_'"; + if (batchId.length !== 27) return "Batch IDs are 27 characters long"; + return undefined; + }, [batchId]);apps/webapp/app/components/runs/v3/RunFilters.tsx (2)
175-179
: Consider filtering out daterange at the source. Instead of filtering out daterange in the MainMenu component, consider moving this logic to the filter types definition by making it a hidden filter type.
{ name: "created", title: "Created", icon: <CalendarIcon className="size-4" /> }, - { name: "daterange", title: "Custom date range", icon: <CalendarIcon className="size-4" /> }, + { name: "daterange", title: "Custom date range", icon: <CalendarIcon className="size-4" />, hidden: true },
707-731
: Add tooltip to explain disabled state. When the toggle is disabled due to batch/run/schedule ID being present, users might not understand why. Consider adding a tooltip to explain this.
return ( + <TooltipProvider> + <Tooltip> + <TooltipTrigger> <Switch disabled={disabled} variant="small" label="Show child runs" checked={disabled ? true : showChildTasks} onCheckedChange={(checked) => { replace({ showChildTasks: checked ? "true" : undefined, }); }} /> + </TooltipTrigger> + <TooltipContent> + {disabled ? "Always showing child runs when filtering by ID" : "Toggle visibility of child runs"} + </TooltipContent> + </Tooltip> + </TooltipProvider> );apps/webapp/app/components/runs/v3/SharedFilters.tsx (1)
427-447
: Remove unnecessary Fragment inAppliedCustomDateRangeFilter
The Fragment (
<></>
) wrapping thevalue
prop inAppliedFilter
is unnecessary since it contains only one child. Removing it simplifies the code without affecting functionality.Apply this diff to remove the unnecessary Fragment:
- value={ - <> - {rangeType === "range" ? ( - <span> - <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} - <DateTime date={toDate!} includeTime includeSeconds /> - </span> - ) : rangeType === "from" ? ( - <DateTime date={fromDate!} includeTime includeSeconds /> - ) : ( - <DateTime date={toDate!} includeTime includeSeconds /> - )} - </> - } + value={ + rangeType === "range" ? ( + <span> + <DateTime date={fromDate!} includeTime includeSeconds /> –{" "} + <DateTime date={toDate!} includeTime includeSeconds /> + </span> + ) : rangeType === "from" ? ( + <DateTime date={fromDate!} includeTime includeSeconds /> + ) : ( + <DateTime date={toDate!} includeTime includeSeconds /> + ) + }🧰 Tools
🪛 Biome (1.9.4)
[error] 435-447: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment(lint/complexity/noUselessFragments)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
apps/webapp/app/components/runs/v3/BatchFilters.tsx (1 hunks)
apps/webapp/app/components/runs/v3/RunFilters.tsx (13 hunks)
apps/webapp/app/components/runs/v3/SharedFilters.tsx (1 hunks)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/components/runs/v3/SharedFilters.tsx
[error] 435-447: Avoid using unnecessary Fragment.
A fragment is redundant if it contains only one child, or if it is the child of a html element, and is not a keyed fragment.
Unsafe fix: Remove the Fragment
(lint/complexity/noUselessFragments)
[error] 477-477: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.
See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.
(lint/suspicious/noGlobalIsNan)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
[error] 98-98: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
🔇 Additional comments (10)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (3)
8-22
: Add validation for pageSize parameter
The pageSize parameter should be validated to ensure it's a positive number to prevent potential issues with the SQL LIMIT clause.
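A minimal sketch of what that validation could look like with zod; the bounds and default are illustrative, not the presenter's actual values:
import { z } from "zod";

// Coerce then clamp, so the value that reaches the SQL LIMIT clause is always a small positive integer.
const PageSizeSchema = z.coerce.number().int().min(1).max(100).default(25);

PageSizeSchema.parse("25");      // 25
PageSizeSchema.parse(undefined); // 25 (falls back to the default)
// PageSizeSchema.parse(0) or PageSizeSchema.parse(-5) throws instead of producing an invalid LIMIT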
87-87
: Handle potential undefined value from 'parse' function
The parse function may return null or undefined if the period string is invalid.
55-80
: Restrict project access to the user's organization
The project query should be scoped to the user's organization to prevent unauthorized access.
apps/webapp/app/components/runs/v3/BatchFilters.tsx (4)
126-158
: LGTM! Well-implemented filter menu
The implementation has good accessibility support through Ariakit and includes keyboard shortcuts.
52-67
:
Maintain consistent parameter naming
The schema uses 'id' while the rest of the codebase uses 'batchId'. This inconsistency should be addressed.
Apply this diff to fix the inconsistency:
- id: z.string().optional(),
+ batchId: z.string().optional(),
83-89
:
Update parameter name in hasFilters check
The hasFilters check should use 'batchId' instead of 'id' to maintain consistency with the schema change.
const hasFilters =
searchParams.has("statuses") ||
searchParams.has("environments") ||
- searchParams.has("id") ||
+ searchParams.has("batchId") ||
searchParams.has("period") ||
searchParams.has("from") ||
searchParams.has("to");
Likely invalid or redundant comment.
358-365
: 🛠️ Refactor suggestion
Move batch ID validation to schema level
The batch ID validation logic should be part of the zod schema for consistent validation across the application.
Update the schema definition:
- id: z.string().optional(),
+ batchId: z
+ .string()
+ .regex(/^batch_[a-zA-Z0-9]{21}$/, {
+ message: "Batch IDs must start with 'batch_' and be 27 characters long",
+ })
+ .optional(),
Likely invalid or redundant comment.
apps/webapp/app/components/runs/v3/RunFilters.tsx (3)
102-105
: LGTM: Schema changes are well-structured
The new fields follow the existing schema pattern and are properly typed as optional.
1023-1023
:
Fix incorrect label in ScheduleIdDropdown
The label incorrectly shows "Batch ID" instead of "Schedule ID".
- <Label>Batch ID</Label>
+ <Label>Schedule ID</Label>
Likely invalid or redundant comment.
999-1001
:
Fix inconsistent schedule ID prefix check
The prefix check doesn't match the error message:
- Check uses
startsWith("sched")
- Error message states IDs start with
"sched_"
- if (!scheduleId.startsWith("sched")) {
+ if (!scheduleId.startsWith("sched_")) {
Likely invalid or redundant comment.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
apps/webapp/app/components/runs/v3/RunFilters.tsx (3)
178-182
: Consider grouping related filter types. The filter types could be organized better by grouping related items together (e.g., all ID-related filters).
const filterTypes = [ { name: "statuses", title: "Status", icon: (...) }, { name: "environments", title: "Environment", icon: (...) }, { name: "tasks", title: "Tasks", icon: (...) }, { name: "tags", title: "Tags", icon: (...) }, { name: "created", title: "Created", icon: (...) }, { name: "daterange", title: "Custom date range", icon: (...) }, + // ID-related filters { name: "run", title: "Run ID", icon: (...) }, { name: "batch", title: "Batch ID", icon: (...) }, { name: "schedule", title: "Schedule ID", icon: (...) }, { name: "bulk", title: "Bulk action", icon: (...) },
710-734
: Add JSDoc comment explaining the toggle behavior. The toggle's behavior with ID filters should be documented for better maintainability.
+/** + * Toggle component for showing/hiding child runs. + * Automatically shows and disables the toggle when any ID filter (batch, run, schedule) is active, + * as these filters inherently require showing child runs. + */ function ShowChildTasksToggle() {
Line range hint
736-1087
: Consider extracting common ID dropdown logic. The Run, Batch, and Schedule ID dropdowns share significant code patterns. Consider creating a reusable component to reduce duplication.
Example implementation:
type IdDropdownProps = { type: 'run' | 'batch' | 'schedule'; prefix: string; length: number; width: string; // ... other common props }; function IdDropdown({ type, prefix, length, width, ...props }: IdDropdownProps) { // Common implementation }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
apps/webapp/app/components/runs/v3/RunFilters.tsx
(13 hunks)
🔇 Additional comments (3)
apps/webapp/app/components/runs/v3/RunFilters.tsx (3)
102-105
: LGTM: Schema updates align with new filtering capabilities
The new schema fields are properly typed and marked as optional, maintaining consistency with existing patterns.
1026-1026
:
Fix incorrect label in ScheduleIdDropdown
The label incorrectly shows "Batch ID" instead of "Schedule ID".
- <Label>Batch ID</Label>
+ <Label>Schedule ID</Label>
Likely invalid or redundant comment.
1002-1004
:
Fix inconsistent schedule ID validation
The validation check doesn't match the error message regarding the prefix.
- if (!scheduleId.startsWith("sched")) {
+ if (!scheduleId.startsWith("sched_")) {
Likely invalid or redundant comment.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (1)
96-106
: Use lowercase 'bigint' type instead of 'BigInt'. For consistency with TypeScript primitives, use lowercase 'bigint' instead of 'BigInt'.
- runCount: BigInt; + runCount: bigint;🧰 Tools
🪛 Biome (1.9.4)
[error] 103-103: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead(lint/complexity/noBannedTypes)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
(1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
[error] 103-103: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
🔇 Additional comments (7)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (7)
30-43
: LGTM! Well-structured class and method declaration
The class extends BasePresenter appropriately, and the method parameters are well-organized with sensible defaults.
92-93
: Handle invalid period parsing
The parse function may return null for invalid period values, which could lead to unexpected behavior.
94-152
: LGTM! Well-constructed SQL query
The query is well-structured with proper SQL injection prevention using Prisma.sql tagged templates.
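For readers unfamiliar with the pattern, a simplified sketch of how Prisma.sql tagged templates keep values parameterized while still allowing optional fragments; the table and column names are illustrative, not the presenter's exact query:
import { Prisma, PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Interpolated values become bind parameters; fragments built with Prisma.sql/Prisma.empty
// compose without string concatenation.
async function listBatches(environmentIds: string[], friendlyId?: string) {
  const idFilter = friendlyId ? Prisma.sql`AND b."friendlyId" = ${friendlyId}` : Prisma.empty;

  return prisma.$queryRaw`
    SELECT b.id, b."friendlyId", b.status
    FROM "BatchTaskRun" b
    WHERE b."runtimeEnvironmentId" IN (${Prisma.join(environmentIds)})
    ${idFilter}
    ORDER BY b.id DESC
    LIMIT 25
  `;
}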
🧰 Tools
🪛 Biome (1.9.4)
[error] 103-103: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
160-163
: Add validation for cursor values
The cursor access using optional chaining could be made more robust with explicit checks.
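One possible shape for that, sketched independently of the presenter; the row type and page-size handling are assumptions:
type Row = { id: string };

// Fetch pageSize + 1 rows, then derive the cursor with an explicit empty-page check
// instead of optional chaining over possibly-missing elements.
function pageCursors(rows: Row[], pageSize: number) {
  const hasMore = rows.length > pageSize;
  const page = hasMore ? rows.slice(0, pageSize) : rows;
  return {
    page,
    nextCursor: hasMore && page.length > 0 ? page[page.length - 1].id : undefined,
  };
}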
181-220
: LGTM! Thorough result transformation
The transformation logic is well-implemented with appropriate error handling and type conversions.
55-80
:
Add organization-level access control to project query
The project query should be scoped to the user's organization to prevent unauthorized access.
where: {
id: projectId,
+ organization: {
+ members: {
+ some: {
+ userId: userId,
+ },
+ },
+ },
},
Likely invalid or redundant comment.
1-220
: Verify security implications of batch listing
Let's verify the usage of this presenter and potential security implications.
✅ Verification successful
Let me gather more information about the access control and route implementation.
Let me try one more verification with corrected paths:
Let me check one final aspect about the base presenter and route structure:
Based on the gathered information, I can now provide the final response:
BatchListPresenter implementation is secure with appropriate access controls
The implementation has several security measures in place:
- The BasePresenter includes proper tracing and error handling
- The displayableEnvironment function properly handles user context and masks sensitive information
- User authentication is enforced through the requireUser middleware
- Environment data exposure is controlled, showing usernames only when necessary
- The presenter uses read replicas appropriately for queries
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for potential security issues in batch listing implementation
# Check for direct usage of BatchListPresenter without proper access control
rg -A 5 "new BatchListPresenter"
# Check for routes using this presenter
rg -A 10 "BatchListPresenter.*call\("
# Check for potential exposure of sensitive data in environment details
ast-grep --pattern 'displayableEnvironment($_, $_)'
Length of output: 2298
Script:
#!/bin/bash
# Check the route implementation for access control
cat "apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx"
# Check the displayableEnvironment implementation
rg -g "runtimeEnvironment.server.ts" "export function displayableEnvironment"
# Check BasePresenter for any security measures
ast-grep --pattern 'class BasePresenter {
$$$
}'
Length of output: 390
Script:
#!/bin/bash
# Check the route implementation for access control with the correct path
cat "apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.batches/route.tsx"
# Check the BasePresenter implementation
rg -g "basePresenter.server.ts" "class BasePresenter"
# Check displayableEnvironment implementation
cat "apps/webapp/app/models/runtimeEnvironment.server.ts"
Length of output: 3874
Script:
#!/bin/bash
# Check the BasePresenter implementation
cat "apps/webapp/app/presenters/v3/basePresenter.server.ts"
# Try to find the route file with correct path using fd
fd -t f "route.tsx" apps/webapp/app/routes/_app.orgs
# Check for any middleware or auth guards
rg -g "*.server.ts" "requireUser|requireOrganization|requireProject"
Length of output: 2459
🧰 Tools
🪛 Biome (1.9.4)
[error] 103-103: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
apps/webapp/app/presenters/v3/RunListPresenter.server.ts (2)
75-78
: Consider refactoring filter conditions for better maintainability. While the implementation is correct, consider refactoring the filter conditions into a more maintainable structure to reduce the complexity of the boolean expression.
- const hasFilters = - (tasks !== undefined && tasks.length > 0) || - (versions !== undefined && versions.length > 0) || - hasStatusFilters || - (environments !== undefined && environments.length > 0) || - (period !== undefined && period !== "all") || - (bulkId !== undefined && bulkId !== "") || - from !== undefined || - to !== undefined || - (scheduleId !== undefined && scheduleId !== "") || - (tags !== undefined && tags.length > 0) || - batchId !== undefined || - runId !== undefined || - typeof isTest === "boolean" || - rootOnly === true; + const filterConditions = { + tasks: tasks?.length > 0, + versions: versions?.length > 0, + status: hasStatusFilters, + environments: environments?.length > 0, + period: period !== undefined && period !== "all", + bulkId: bulkId !== undefined && bulkId !== "", + dateRange: from !== undefined || to !== undefined, + scheduleId: scheduleId !== undefined && scheduleId !== "", + tags: tags?.length > 0, + batch: batchId !== undefined, + run: runId !== undefined, + test: typeof isTest === "boolean", + root: rootOnly === true + }; + + const hasFilters = Object.values(filterConditions).some(Boolean);
181-181
: Remove unnecessary optional chaining. The optional chaining operator is redundant here since
schedule
is already checked in the if condition.- scheduleId = schedule?.id; + scheduleId = schedule.id;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
apps/webapp/app/presenters/v3/RunListPresenter.server.ts
(9 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/presenters/v3/RunListPresenter.server.ts
[error] 170-170: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🔇 Additional comments (4)
apps/webapp/app/presenters/v3/RunListPresenter.server.ts (4)
25-27
: LGTM: Type definitions are clear and well-structured
The new optional properties rootOnly, batchId, and runId are properly typed and follow the existing naming conventions.
186-188
: Verify the impact of overriding rootOnly
The current implementation forces rootOnly to false when any of these IDs are present, which might override an explicit user preference. Consider preserving the user's choice when explicitly set.
301-303
: Handle NULL values in runTags when filtering
The array overlap operator (&&) with NULL may lead to incorrect filtering results.
374-374
: LGTM: Proper handling of tags in response mapping
The implementation correctly handles null tags and provides consistent sorting, ensuring a predictable response format.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@ericallam do we need to make all the object store env vars required now?
ccc6ae5 to 69f3dca
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (13)
references/v3-catalog/src/utils/types.ts (2)
1-3
: Consider consolidating duplicate type definitions
Expect<T>
and ExpectTrue<T>
serve the same purpose. Consider keeping just one of them to reduce redundancy and improve maintainability.
 export type Expect<T extends true> = T;
-export type ExpectTrue<T extends true> = T;
 export type ExpectFalse<T extends false> = T;
Also, consider adding JSDoc comments to explain the purpose of these utility types for better documentation.
+/** Type-level assertion that T must be true */
 export type Expect<T extends true> = T;
+/** Type-level assertion that T must be false */
 export type ExpectFalse<T extends false> = T;
5-8
: LGTM! Consider adding documentation for complex type logic
The implementation of
Equal
and NotEqual
types is correct and follows TypeScript best practices for type-level comparison. The function type inference trick used here is a reliable way to compare types.
Consider adding detailed JSDoc comments to explain the implementation:
+/**
+ * Compares two types for equality using function type inference.
+ * @template X - First type to compare
+ * @template Y - Second type to compare
+ * @returns true if types are equal, false otherwise
+ */
 export type Equal<X, Y> = (<T>() => T extends X ? 1 : 2) extends <T>() => T extends Y ? 1 : 2
   ? true
   : false;
+/**
+ * Checks if two types are not equal using the Equal type.
+ * @template X - First type to compare
+ * @template Y - Second type to compare
+ * @returns true if types are not equal, false otherwise
+ */
 export type NotEqual<X, Y> = true extends Equal<X, Y> ? false : true;
references/v3-catalog/trigger.config.ts (1)
23-23
: Document the development environment impact
Consider adding a comment explaining why retries are enabled in development and its implications for developers (e.g., longer wait times, increased resource usage).
  retries: {
-   enabledInDev: true,
+   // Enabled to support testing of batch processing features
+   // Note: This may increase task execution times during development
+   enabledInDev: true,
    default: {
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (2)
8-22
: Consider adding input validation for filter parameters
The
BatchListOptions
type could benefit from stronger type constraints:
 export type BatchListOptions = {
   userId?: string;
   projectId: string;
   //filters
   friendlyId?: string;
   statuses?: BatchTaskRunStatus[];
-  environments?: string[];
+  environments?: Array<string & { __brand: 'EnvironmentId' }>;
   period?: string;
-  from?: number;
-  to?: number;
+  from?: number & { __brand: 'UnixTimestamp' };
+  to?: number & { __brand: 'UnixTimestamp' };
   //pagination
   direction?: Direction;
   cursor?: string;
-  pageSize?: number;
+  pageSize?: number & { __brand: 'PositiveNumber' };
 };
95-152
: Consider using a query builder for complex SQL
The raw SQL query is complex and could benefit from using a query builder for better maintainability.
Consider using Prisma's query builder instead of raw SQL:
const query = this._replica.batchTaskRun.findMany({
  where: {
    runtimeEnvironmentId: { in: environmentIds },
    ...(cursor && {
      id: direction === "forward" ? { lt: cursor } : { gt: cursor },
    }),
    ...(friendlyId && { friendlyId }),
    ...(statuses?.length && {
      AND: [{ status: { in: statuses } }, { batchVersion: { not: "v1" } }],
    }),
    ...(periodMs && {
      createdAt: { gte: new Date(Date.now() - periodMs) },
    }),
    ...(from && { createdAt: { gte: new Date(from) } }),
    ...(to && { createdAt: { lte: new Date(to) } }),
  },
  orderBy: {
    id: direction === "forward" ? "desc" : "asc",
  },
  take: pageSize + 1,
});
🪛 Biome (1.9.4)
[error] 103-103: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead(lint/complexity/noBannedTypes)
apps/webapp/app/presenters/v3/RunListPresenter.server.ts (2)
75-78
: Consider using type-safe boolean checksWhile the logic is correct, consider using more explicit type-safe checks for boolean values:
- typeof isTest === "boolean" || + isTest !== undefined || rootOnly === true;This maintains consistency with how other boolean filters are checked in the codebase.
374-374
: Consider using nullish coalescing for tagsThe current implementation can be made more concise:
- tags: run.tags ? run.tags.sort((a, b) => a.localeCompare(b)) : [], + tags: (run.tags ?? []).sort((a, b) => a.localeCompare(b)),references/v3-catalog/src/trigger/batch.ts (3)
298-301
: Address TODO comments for test coverageThese TODO comments indicate missing test coverage for important batch trigger methods.
Would you like me to help implement these missing test cases?
284-286
: Consider extracting magic numbers as named constantsThe batch size (501) is used directly in the test. Consider extracting this and other magic numbers as named constants for better maintainability.
+const MAX_BATCH_SIZE = 500; +const TEST_OVERFLOW_SIZE = MAX_BATCH_SIZE + 1; -Array.from({ length: 501 }, (_, i) => ({ +Array.from({ length: TEST_OVERFLOW_SIZE }, (_, i) => ({ payload: { foo: `bar${i}` }, }))
590-610
: Consider using a dedicated performance testing utilityThe performance measurement using
performance.now()
could be structured better using a dedicated testing utility that provides consistent timing measurements and statistical analysis.Consider using a performance testing library or creating a utility function:
async function measurePerformance<T>( name: string, fn: () => Promise<T> ): Promise<T> { const start = performance.now(); const result = await fn(); const end = performance.now(); logger.debug(`${name} time`, { time: end - start }); return result; }apps/webapp/app/components/runs/v3/RunFilters.tsx (2)
134-139
: Consider grouping related filter typesThe filter types could be organized better by grouping related filters together (e.g., all ID-based filters).
Consider reorganizing the filter types array like this:
const filterTypes = [ // Status and environment filters { name: "statuses", title: "Status", icon: ( <div className="flex size-4 items-center justify-center"> <div className="size-3 rounded-full border-2 border-text-dimmed" /> </div> ), }, { name: "environments", title: "Environment", icon: <CpuChipIcon className="size-4" /> }, // Task and tag filters { name: "tasks", title: "Tasks", icon: <TaskIcon className="size-4" /> }, { name: "tags", title: "Tags", icon: <TagIcon className="size-4" /> }, // Time-based filters { name: "created", title: "Created", icon: <CalendarIcon className="size-4" /> }, { name: "daterange", title: "Custom date range", icon: <CalendarIcon className="size-4" /> }, // ID-based filters { name: "run", title: "Run ID", icon: <FingerPrintIcon className="size-4" /> }, { name: "batch", title: "Batch ID", icon: <Squares2X2Icon className="size-4" /> }, { name: "schedule", title: "Schedule ID", icon: <ClockIcon className="size-4" /> }, // Action filters { name: "bulk", title: "Bulk action", icon: <ListChecks className="size-4" /> }, ] as const;Also applies to: 178-182
710-734
: Improve disabled state UX in ShowChildTasksToggleWhile the logic is correct, the UX could be improved by providing feedback about why the toggle is disabled.
Consider adding a tooltip to explain the disabled state:
<Switch disabled={disabled} variant="small" label="Show child runs" checked={disabled ? true : showChildTasks} + title={disabled ? "Child runs are always shown when filtering by Batch, Run, or Schedule ID" : undefined} onCheckedChange={(checked) => { replace({ showChildTasks: checked ? "true" : undefined, }); }} />
packages/trigger-sdk/src/v3/shared.ts (1)
Line range hint
1214-1236
: Consider improving the default task identifier handlingUsing "unknown" as the default task identifier when
taskIdentifier
is missing could make debugging more difficult. Consider:
- Logging a warning when falling back to the default
- Using a more descriptive default that includes context about where it was generated
- return await handleTaskRunExecutionResult(item, item.taskIdentifier ?? "unknown"); + const defaultId = `unknown_task_${item.id ?? "no_id"}`; + if (!item.taskIdentifier) { + console.warn(`Missing task identifier for run ${item.id}, using ${defaultId}`); + } + return await handleTaskRunExecutionResult(item, item.taskIdentifier ?? defaultId);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
apps/webapp/app/components/runs/v3/RunFilters.tsx
(13 hunks)apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
(1 hunks)apps/webapp/app/presenters/v3/RunListPresenter.server.ts
(9 hunks)packages/trigger-sdk/src/v3/shared.ts
(30 hunks)references/v3-catalog/src/trigger/batch.ts
(2 hunks)references/v3-catalog/src/utils/types.ts
(1 hunks)references/v3-catalog/trigger.config.ts
(1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts
[error] 103-103: Don't use 'BigInt' as a type.
Use lowercase primitives for consistency.
Safe fix: Use 'bigint' instead
(lint/complexity/noBannedTypes)
apps/webapp/app/presenters/v3/RunListPresenter.server.ts
[error] 170-170: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🪛 Gitleaks (8.21.2)
references/v3-catalog/src/trigger/batch.ts
470-470: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
471-471: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
494-494: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
495-495: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🔇 Additional comments (20)
references/v3-catalog/src/utils/types.ts (1)
1-8
: Verify usage in batch processing context
Let's ensure these utility types are being used correctly in the batch processing implementation.
✅ Verification successful
Types are correctly used in batch processing implementation
The utility types are properly used in batch processing context, specifically in references/v3-catalog/src/trigger/batch.ts
where they serve as type-level assertions to ensure type safety. The types are used to verify:
- Payload types for batch runs (
Run1Payload
,Run2Payload
, etc.) - Output types for batch runs (
Run1Output
,Run2Output
, etc.) - Task run payload and output types (
TaskRun1Payload
,TaskRun1Output
, etc.)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usage of these utility types in batch-related code
echo "Searching for usage of utility types in batch processing code..."
rg -t typescript "Expect|Equal|NotEqual" --glob "**/{batch,trigger}*.ts"
Length of output: 248
Script:
#!/bin/bash
# Search for usage of these utility types in all TypeScript files
echo "Searching for usage of utility types across TypeScript files..."
rg "Expect|Equal|NotEqual" --type ts
# Also find all batch-related TypeScript files to understand the context
echo -e "\nListing batch-related TypeScript files..."
fd -e ts -e tsx . | grep -i "batch\|trigger"
Length of output: 29657
references/v3-catalog/trigger.config.ts (1)
23-23
: Verify the relationship between retries and batch processing
The change to enable retries in development aligns with the PR's batch processing improvements. However, let's verify if this is necessary for testing the new batch functionality.
✅ Verification successful
Based on the search results and the original review comment context, I can now generate the final response:
Enabling retries in development is appropriate for batch processing
The change to enable retries in development environment is necessary and beneficial for testing the batch processing functionality, as the codebase shows:
- The project implements batch processing with size limits up to 500 items
- Batch operations have built-in retry mechanisms for handling failures
- Testing retry behavior in development helps catch issues with batch processing earlier
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for retry-related code in batch processing implementation
rg -A 5 "retry|maxAttempts" --type ts
# Look for batch processing configurations that might reference retries
ast-grep --pattern 'batch.$$$retries$$$'
Length of output: 100867
Script:
#!/bin/bash
# Search for batch-related retry configurations and usage
rg -A 5 "batch.*retry|retry.*batch" --type ts
# Look for batch processing configurations
ast-grep --pattern 'batch({
$$$
retry: {
$$$
}
$$$
})'
Length of output: 1687
Script:
#!/bin/bash
# Look for batch processing configurations in the specific file
rg -A 10 "batch" references/v3-catalog/trigger.config.ts
# Search for batch processing implementation details
rg -A 5 "batch.*size|maxBatchSize" --type ts
Length of output: 3091
apps/webapp/app/presenters/v3/BatchListPresenter.server.ts (5)
30-43
: LGTM! Well-structured class and method declaration
The parameters are well-organized and have appropriate defaults.
181-219
: LGTM! Well-structured result transformation
The transformation logic is thorough and includes proper error handling.
92-93
:
Add validation for period parsing
The period parsing needs validation to handle invalid inputs.
const periodMs = period ? parse(period) : undefined;
+if (period && periodMs === null) {
+ throw new Error(`Invalid period value: ${period}`);
+}
158-174
:
Add null checks for cursor access
The cursor access needs validation to handle edge cases.
switch (direction) {
case "forward":
- previous = cursor ? batches.at(0)?.id : undefined;
+ previous = cursor && batches.length > 0 ? batches[0].id : undefined;
if (hasMore) {
- next = batches[pageSize - 1]?.id;
+ next = batches[pageSize - 1].id;
}
break;
case "backward":
batches.reverse();
if (hasMore) {
- previous = batches[1]?.id;
- next = batches[pageSize]?.id;
+ previous = batches.length > 1 ? batches[1].id : undefined;
+ next = batches.length > pageSize ? batches[pageSize].id : undefined;
} else {
- next = batches[pageSize - 1]?.id;
+ next = batches.length === pageSize ? batches[pageSize - 1].id : undefined;
}
break;
}
54-80
:
Add organization-level access control to project query
The project query should be scoped to the user's organization to prevent unauthorized access.
const project = await this._replica.project.findFirstOrThrow({
select: {
id: true,
environments: {
select: {
id: true,
type: true,
slug: true,
orgMember: {
select: {
user: {
select: {
id: true,
name: true,
displayName: true,
},
},
},
},
},
},
},
where: {
id: projectId,
+ organization: {
+ members: {
+ some: {
+ userId: userId,
+ },
+ },
+ },
},
});
Likely invalid or redundant comment.
apps/webapp/app/presenters/v3/RunListPresenter.server.ts (4)
25-27
: LGTM: Type definitions are clear and well-structured
The new optional properties rootOnly
, batchId
, and runId
are properly typed and follow consistent naming conventions.
165-165
: Remove unnecessary optional chaining
The if (batch)
check already ensures batch
is not null.
- batchId = batch?.id;
+ batchId = batch.id;
186-188
: Respect explicit rootOnly preference
The current implementation might override an explicit user preference for rootOnly: true
.
- if (batchId || runId || scheduleId) {
+ if (rootOnly === undefined && (batchId || runId || scheduleId)) {
rootOnly = false;
}
301-303
: Handle NULL values in runTags filtering
The current implementation might not handle NULL values correctly in the tags filter.
- tags && tags.length > 0
- ? Prisma.sql`AND tr."runTags" && ARRAY[${Prisma.join(tags)}]::text[]`
+ tags && tags.length > 0
+ ? Prisma.sql`AND coalesce(tr."runTags", '{}') && ARRAY[${Prisma.join(tags)}]::text[]`
references/v3-catalog/src/trigger/batch.ts (1)
623-625
: 🛠️ Refactor suggestion
Add type safety to the payload parameter
The any
type for the payload parameter reduces type safety. Consider defining an interface for the expected payload structure.
+interface BatchTestPayload {
+ foo: string;
+}
-run: async (payload: any) => {
+run: async (payload: BatchTestPayload) => {
return payload;
}
Likely invalid or redundant comment.
apps/webapp/app/components/runs/v3/RunFilters.tsx (4)
102-105
: LGTM: Schema changes for new filter types
The new schema fields are properly typed and marked as optional, aligning with the PR's objective to enhance filtering capabilities.
854-970
: LGTM: BatchIdDropdown implementation
The implementation is consistent with RunIdDropdown and includes proper validation and error handling.
1002-1004
:
Fix schedule ID prefix validation inconsistency
The validation checks for "sched" but the error message expects "sched_".
-if (!scheduleId.startsWith("sched")) {
+if (!scheduleId.startsWith("sched_")) {
Likely invalid or redundant comment.
1026-1026
:
Fix incorrect label in ScheduleIdDropdown
The Label component shows "Batch ID" instead of "Schedule ID".
-<Label>Batch ID</Label>
+<Label>Schedule ID</Label>
Likely invalid or redundant comment.
packages/trigger-sdk/src/v3/shared.ts (4)
Line range hint 145-154
: LGTM: Enhanced batch trigger with idempotency support
The changes to batchTrigger
method align with the PR objectives by adding support for idempotency key TTL and improving batch processing capabilities.
523-609
: LGTM: Well-implemented batch trigger by ID
The implementation includes:
- Proper payload parsing and validation
- Comprehensive error handling
- Detailed telemetry with span attributes
- Idempotency key support with TTL
784-867
: LGTM: Robust batch trigger and wait implementation
The implementation includes:
- Context validation
- Proper error handling
- Comprehensive telemetry
- Support for different tasks within a batch
Line range hint 935-969
: Verify batch size limits in the API client
The PR objectives mention increasing batch sizes from 100 to 500. Let's verify this limit is properly enforced.
✅ Verification successful
From the search results, I can see evidence of the batch size limit being tested in the codebase. Specifically, in references/v3-catalog/src/trigger/batch.ts
, there's a test that verifies a 501 item batch fails:
await tasks.batchTrigger<typeof batchV2TestChild>(
"batch-v2-test-child",
Array.from({ length: 501 }, (_, i) => ({
payload: { foo: `bar${i}` },
}))
);
Let me verify if there are any other relevant batch size limit configurations.
Based on the search results, I can see clear evidence of the batch size limit being enforced at 500 items. Specifically:
- In
apps/webapp/app/env.server.ts
:
MAX_BATCH_V2_TRIGGER_ITEMS: z.coerce.number().int().default(500),
- In
references/v3-catalog/src/trigger/batch.ts
, there's a test that verifies a 501 item batch fails with the error message:
'400 "Batch size of 501 is too large. Maximum allowed batch size is 500."'
- In
docs/limits.mdx
, the batch size limits are explicitly documented:
| Limit | Details |
|:------- |:------------------ |
| Dev | At most 500 |
| Staging | At most 10 million |
| Prod | At most 10 million |
Batch size limit is properly enforced at 500 items
The batch size increase from 100 to 500 described in the PR objectives has been implemented; it is enforced through validation and documented in the limits documentation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for batch size limit configuration
rg -g '!dist' -g '!*.test.*' -g '!*.spec.*' "batchTrigger" -A 5 -B 5
Length of output: 112703
Script:
#!/bin/bash
# Search for batch size limit constants or configurations
rg -g '!dist' -g '!*.test.*' -g '!*.spec.*' "500" -B 3 -A 3
Length of output: 101626
if (!runId.startsWith("run_")) {
  error = "Run IDs start with 'run_'";
} else if (runId.length !== 25) {
  error = "Run IDs are 25 characters long";
}
💡 Codebase verification
Based on the search results, I can now generate the final response:
Run ID length validation is incorrect
The validation enforcing a length of 25 characters appears to be incorrect. From the examples found in the codebase (e.g., "run_pxxs52j3geik6cj6j8piq"), Run IDs follow the pattern of "run_" prefix followed by 21 characters, making the total length 25 characters.
apps/webapp/app/components/runs/v3/RunFilters.tsx
: Update the validation message to be more precise: "Run IDs should be 'run_' followed by 21 characters"
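For illustration only, a hypothetical helper that matches the format described above (the character alphabet after the "run_" prefix is an assumption, not taken from the codebase):

```ts
// Matches "run_" plus 21 characters, i.e. 25 characters total.
const RUN_ID_PATTERN = /^run_[a-z0-9]{21}$/;

function validateRunId(runId: string): string | undefined {
  if (!runId.startsWith("run_")) return "Run IDs start with 'run_'";
  if (!RUN_ID_PATTERN.test(runId)) return "Run IDs should be 'run_' followed by 21 characters";
  return undefined;
}
```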
🔗 Analysis chain
Verify Run ID length validation
The validation enforces a length of 25 characters, but this should be verified against the backend requirements.
Let me search for more validation patterns to get a complete picture.
Let me modify the search command to handle the file types correctly.
Let me try one final search without specifying file types.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for Run ID validation patterns in the codebase
rg -A 2 'run_[a-zA-Z0-9]{21}' # Standard Run ID pattern is usually 21 chars after prefix
Length of output: 1585
Script:
#!/bin/bash
# Search for Run ID validation patterns and examples in the codebase
rg -A 2 'runId|run_' --type ts --type tsx --type yaml
Length of output: 83
Script:
#!/bin/bash
# Search for Run ID validation patterns and examples in the codebase
rg -A 2 'runId|run_' --type typescript --type yaml
# Also search specifically for length validation patterns
rg -A 2 'length.*run|run.*length' --type typescript
Length of output: 176
Script:
#!/bin/bash
# Search for Run ID patterns and validation in the codebase
rg -A 2 'runId.*length|run_.*length|length.*run_'
# Also search for Run ID format in tests or schemas
rg -A 2 'run_[a-zA-Z0-9]' 'test|spec|schema|type'
Length of output: 2173
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
packages/trigger-sdk/src/v3/shared.ts (3)
576-662
: Consider performance optimizations for batch operations
While the batch trigger implementation is solid, consider these performance improvements:
- The
Promise.all()
for processing items could potentially be chunked for very large batches
- Consider implementing request retries with exponential backoff for failed items
- The payload parsing could potentially be parallelized more efficiently
Example implementation for chunked processing:
-items: await Promise.all(items.map(async (item) => { /* ... existing implementation */ })),
+items: (
+  await Promise.all(
+    chunk(items, 50).map((batchChunk) =>
+      Promise.all(batchChunk.map(async (item) => { /* ... existing implementation */ }))
+    )
+  )
+).flat(),
1462-1496
: Enhance error handling with more context
The batch result handling could benefit from more detailed error information:
- Consider adding error codes for different failure scenarios
- Include more context in error messages
- Add timing information for failed operations
Example implementation:
async function handleBatchTaskRunExecutionResult<TIdentifier extends string, TOutput>(
  items: Array<TaskRunExecutionResult>,
  taskIdentifier: TIdentifier
): Promise<Array<TaskRunResult<TIdentifier, TOutput>>> {
+ const startTime = Date.now();
  const someObjectStoreOutputs = items.some(
    (item) => item.ok && item.outputType === "application/store"
  );
  if (!someObjectStoreOutputs) {
    const results = await Promise.all(
      items.map(async (item) => {
-       return await handleTaskRunExecutionResult<TIdentifier, TOutput>(item, taskIdentifier);
+       try {
+         return await handleTaskRunExecutionResult<TIdentifier, TOutput>(item, taskIdentifier);
+       } catch (error) {
+         return {
+           ok: false,
+           id: item.id,
+           taskIdentifier,
+           error: createErrorTaskError({
+             ...error,
+             code: 'BATCH_EXECUTION_ERROR',
+             context: {
+               processingTime: Date.now() - startTime,
+               failurePhase: 'execution'
+             }
+           })
+         };
+       }
      })
    );
523-575
: Consider implementing dynamic batch size limits
The PR objectives mention increasing batch sizes from 100 to 500. While the implementation supports this, consider:
- Making the batch size limit configurable
- Implementing dynamic batch size adjustment based on system load
- Adding monitoring for batch processing performance
This would help prevent system overload while maintaining flexibility.
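Until a configurable limit exists, callers can stay under the documented 500-item cap themselves. A minimal client-side sketch (the constant mirrors the documented default, and the trigger callback stands in for whichever batch method is used):

```ts
const MAX_ITEMS_PER_BATCH = 500; // documented server-side default

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Submit an oversized workload as several sequential batch calls.
async function triggerInChunks<T>(
  items: T[],
  trigger: (group: T[]) => Promise<unknown>
): Promise<void> {
  for (const group of chunk(items, MAX_ITEMS_PER_BATCH)) {
    await trigger(group);
  }
}
```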
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
packages/trigger-sdk/src/v3/batch.ts
(1 hunks)packages/trigger-sdk/src/v3/shared.ts
(30 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/trigger-sdk/src/v3/batch.ts
🔇 Additional comments (4)
packages/trigger-sdk/src/v3/shared.ts (4)
Line range hint 26-76
: LGTM: Well-structured type system for batch operations
The new types provide strong type safety and clear separation between different batch operation modes. The type hierarchy is well-designed to support both task-by-id and task-by-task operations.
Line range hint 77-100
: LGTM: Comprehensive type exports
All necessary types are exported to support the new batch operation capabilities. The exports maintain consistency with the internal type system.
Line range hint 145-154
: LGTM: Enhanced batch trigger support in task creation
The task creation functions have been properly updated to support the new batch operations and idempotency key TTL. The implementation maintains type safety and proper error handling.
Also applies to: 290-299
1195-1199
: LGTM: Improved observability with span attributes
The core trigger functions now include comprehensive span attributes for better tracing and monitoring. The implementation properly handles idempotency key TTL and maintains robust error handling.
Also applies to: 1258-1279
runs.list
batch.retrieve
Added new batch.trigger and batch.triggerByTask methods that allow triggering multiple different tasks in a single batch:
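As a rough sketch of what such a call looks like (the task IDs, payload shapes, and return-value fields here are illustrative, and exact names may differ from the released SDK):

```ts
import { batch } from "@trigger.dev/sdk/v3";
import type { myTask, myOtherTask } from "./trigger/tasks"; // hypothetical tasks

// Trigger two different tasks in one batch call, typed against both tasks.
const handle = await batch.trigger<typeof myTask | typeof myOtherTask>([
  { id: "my-task", payload: { foo: "bar" } },
  { id: "my-other-task", payload: { count: 42 } },
]);

console.log(handle.batchId); // shape assumed: a handle containing the new batch id
```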
Or if you are inside of a task, you can use triggerByTask:
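Again as a sketch only, with hypothetical child tasks; inside a task you pass task references rather than string IDs:

```ts
import { batch, task } from "@trigger.dev/sdk/v3";
import { childTask1, childTask2 } from "./children"; // hypothetical child tasks

export const parentTask = task({
  id: "parent-task",
  run: async () => {
    // Each item names the task object to run and the payload to send it.
    const handle = await batch.triggerByTask([
      { task: childTask1, payload: { foo: "bar" } },
      { task: childTask2, payload: { count: 2 } },
    ]);
    return handle; // batch id plus per-run handles (shape assumed)
  },
});
```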
Batch dashboard improvements
Idempotency key changes
Idempotency keys now expire after 30 days by default, and you can customize the expiration time when creating a new key by using the idempotencyKeyTTL parameter:
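A sketch of how the TTL option is expected to be passed; the idempotencyKeys helper and the duration-string format ("24h") follow the existing SDK idempotency docs, but treat the exact names as assumptions:

```ts
import { idempotencyKeys, tasks } from "@trigger.dev/sdk/v3";

// A stable key for this logical operation.
const idempotencyKey = await idempotencyKeys.create("user-123-welcome-email");

await tasks.trigger("send-welcome-email", { userId: "123" }, {
  idempotencyKey,
  // Expire the key after 24 hours instead of the new 30-day default.
  idempotencyKeyTTL: "24h",
});
```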
Breaking Changes
Removed the idempotencyKey option from triggerAndWait and batchTriggerAndWait, because it can lead to permanently frozen runs in deployed tasks. We're working on upgrading our entire system to support idempotency keys on these methods, and we'll re-add the option once that's complete.
Summary by CodeRabbit
Release Notes
New Features
New batch methods: batch.trigger, batch.triggerByTask, and batch.triggerByTaskAndWait.
New subscription method: runs.subscribeToBatch.
TimeFrameFilter and ScheduledTaskForm components now explicitly handle timestamps in UTC.
Bug Fixes
Documentation
Style