Conversation
- Add rewrite view mode and Rewrite tab in sidebar (PenLine icon)
- RewriteDiffView: document interface + AI assistant, save to documents API, currently stored in the Document Generator tab
- DocumentGeneratorEditor: rewrite mode (no tool palette, simplified header)
…e-format-improvement updated upload process
- Add comprehensive property-based tests for TrendSearchInputSchema and TrendSearchEventDataSchema using fast-check
- Validate input serialization round-trip through JSON to ensure data integrity
- Test valid input acceptance with various query lengths, company context, and optional categories
- Test invalid input rejection for empty, whitespace-only, and oversized inputs
- Verify event payload structure and field preservation across serialization cycles
- Add type definitions for trend search feature (TrendSearchInputSchema, TrendSearchEventDataSchema, SearchCategoryEnum)
- Update package.json and pnpm-lock.yaml with fast-check dependency
- Ensures type safety and data validation for ai-trend-search-engine feature
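The round-trip invariant those suites assert can be illustrated in plain TypeScript. This is a sketch only: the `TrendSearchInput` shape here is an assumption for illustration, and the real suites generate inputs with fast-check and validate them against the Zod schemas rather than using hand-picked cases.

```typescript
// Illustration of the JSON round-trip property the fast-check suites check.
// The TrendSearchInput shape is assumed, not copied from the repo.
interface TrendSearchInput {
  query: string;
  companyContext?: string;
  categories?: string[];
}

// A serialized-then-parsed input should be structurally identical to the
// original; in the real suites fast-check would feed many generated inputs
// through this check via fc.assert(fc.property(...)).
function roundTripsThroughJson(input: TrendSearchInput): boolean {
  const restored = JSON.parse(JSON.stringify(input)) as TrendSearchInput;
  return JSON.stringify(restored) === JSON.stringify(input);
}
```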
The note-taking tool previously only supported create/update, leaving the agent unable to retrieve notes it had created earlier in a session. This adds two missing actions called out by TODOs in the source:

- list: queries all studyAgentNotes rows for the current session and supports an optional filters.tags array to narrow results down by tag
- get: fetches a single note by ID, scoped to the same user + session

The Zod schema is updated to include the two new action variants plus the filters field. The manageNotes return type gains notes?: StudyNote[] to carry the list result. The tool description and examples are updated to reflect the expanded capability.
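A minimal sketch of the expanded action union and the filters.tags behaviour described above. Type shapes and field names are inferred from the commit message, not copied from the repo, and the real schema is defined with Zod rather than plain TypeScript types:

```typescript
// Sketch of the expanded input union (names assumed from the commit message).
type ManageNotesInput =
  | { action: "create"; title: string; content: string; tags?: string[] }
  | { action: "update"; noteId: string; content: string }
  | { action: "list"; filters?: { tags?: string[] } }
  | { action: "get"; noteId: string };

interface StudyNote {
  id: string;
  title: string;
  tags: string[];
}

// Illustrative tag filter: with no tags, return everything; otherwise keep
// notes that carry at least one of the requested tags.
function filterByTags(notes: StudyNote[], tags?: string[]): StudyNote[] {
  if (!tags || tags.length === 0) return notes;
  return notes.filter((n) => tags.some((t) => n.tags.includes(t)));
}
```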
Issue #186 asks for BATCH_UPLOAD_MAX_FILES to be centralised in constants.ts so that the batch upload route, upload queue UI, and documentation can all reference a single source of truth instead of hard-coding numbers in several places.

Adds BATCH_UPLOAD_CONFIG with four values:

- MAX_FILES – cap on files per batch request (20)
- MAX_TOTAL_SIZE_MB – combined size guard for the whole batch (500 MB)
- MAX_CONCURRENT_UPLOADS – pipeline concurrency for OCR/embedding (3)
- QUEUE_POLL_INTERVAL_MS – front-end polling cadence for job status (2 s)

Also exports BATCH_UPLOAD_MAX_FILES as a direct alias of BATCH_UPLOAD_CONFIG.MAX_FILES so call-sites that only need the file count can import a flat name without destructuring the object. The values are placed next to the existing DOCUMENT_LIMITS block where they are most discoverable, and follow the same `as const` pattern used throughout the file.
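Under the `as const` pattern the commit describes, the block might look like this sketch. Values come from the message above; the surrounding constants.ts contents are assumed:

```typescript
// Sketch of the added constants block (values from the commit message).
export const BATCH_UPLOAD_CONFIG = {
  MAX_FILES: 20,                // cap on files per batch request
  MAX_TOTAL_SIZE_MB: 500,       // combined size guard for the whole batch
  MAX_CONCURRENT_UPLOADS: 3,    // pipeline concurrency for OCR/embedding
  QUEUE_POLL_INTERVAL_MS: 2000, // front-end polling cadence for job status
} as const;

// Flat alias so call-sites that only need the file count can import a single
// name without destructuring the config object.
export const BATCH_UPLOAD_MAX_FILES = BATCH_UPLOAD_CONFIG.MAX_FILES;
```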
Currently extractRecommendedPages returns every page from every retrieved chunk, regardless of whether the AI actually referenced it in the generated answer. This floods the citation list with pages the model never used, as described in issue #90.

Adds filterPagesByAICitation(aiResponse, candidatePages) to references.ts. The function parses the model's text for "page N" / "pages N-M" patterns (case-insensitive) and returns the intersection with the retrieved-chunk candidate set. Two safe fallbacks are included to avoid regressions:

1. If the response contains no page citations (model answered without explicit references), all candidate pages are returned unchanged.
2. If every cited page falls outside the candidate set (hallucinated page number), all candidate pages are returned unchanged.

AIQuery/route.ts now calls filterPagesByAICitation on the AI answer before putting recommendedPages into the JSON response. The new helper is also re-exported through services/index.ts so other endpoints can adopt it incrementally.
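A sketch of the described matching and fallback behaviour. This is a hypothetical implementation, assuming pages are plain numbers; the real code in references.ts may differ:

```typescript
// Sketch: intersect pages the model cited with the retrieved-chunk candidates.
function filterPagesByAICitation(
  aiResponse: string,
  candidatePages: number[],
): number[] {
  const cited = new Set<number>();
  // Match "page N" and "pages N-M", case-insensitive.
  const pattern = /pages?\s+(\d+)(?:\s*[-–]\s*(\d+))?/gi;
  for (const m of aiResponse.matchAll(pattern)) {
    const start = parseInt(m[1], 10);
    const end = m[2] ? parseInt(m[2], 10) : start;
    for (let p = start; p <= end; p++) cited.add(p);
  }
  // Fallback 1: no explicit citations -> keep every candidate.
  if (cited.size === 0) return candidatePages;
  const kept = candidatePages.filter((p) => cited.has(p));
  // Fallback 2: every cited page is outside the candidate set -> keep every candidate.
  return kept.length > 0 ? kept : candidatePages;
}
```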
Co-authored-by: Cursor <[email protected]>
…erator (currently done by filtering)
fixed trigger.dev error; fixed Azure upload dependency error
Updated section headings in README.md for consistency and clarity.
…into feature/ai-trend-search-engine-local
Emojis Removed
…gine (feat) Ai trend search engine
feat/rewrite-engine
…eline-time delete document + inngest fix. No longer need to run pnpm inngest
Update README.md
…ployment fixing docker deployment
improve vercel deployment guide
modified landing page
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: aee9607fe5
```ts
} else {
  await updateBatchStatus(batchId, "complete", {
    completedAt: now,
```
Keep batch failed when unretried files still exist
This branch marks the whole batch complete whenever the current filesToProcess run has no new failures, but it does not account for other files that may already be in failed status. Because the same route then blocks further commits for complete batches, any unretried failed files become permanently stuck. This can happen when a user retries only a subset of failed uploads, so completion should be based on aggregate file statuses (e.g., no failed/queued files remain), not only on failures.length from the current run.
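One way to express the suggested aggregate check, as a sketch. Status names and the helper itself are hypothetical, not taken from the route's actual code:

```typescript
type FileStatus = "queued" | "processing" | "complete" | "failed";

// Sketch: derive the batch status from ALL file rows in the batch, not just
// the failures observed in the current filesToProcess run.
function resolveBatchStatus(
  allFileStatuses: FileStatus[],
): "complete" | "failed" | "processing" {
  // Still work in flight -> the batch is not finished yet.
  if (allFileStatuses.some((s) => s === "queued" || s === "processing")) {
    return "processing";
  }
  // Any lingering failed file (including ones from earlier runs) keeps the
  // batch failed, so retries of the remaining subset are still allowed.
  return allFileStatuses.some((s) => s === "failed") ? "failed" : "complete";
}
```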
```diff
  # Inngest — required in production for background job processing
- INNGEST_EVENT_KEY: ${INNGEST_EVENT_KEY}
+ INNGEST_EVENT_KEY: ${INNGEST_EVENT_KEY:-}
  INNGEST_DEV: http://inngest-dev:8288
```
Remove default INNGEST_DEV override from app service
Setting INNGEST_DEV here forces the SDK into explicit dev mode and sends events to http://inngest-dev:8288, but this compose file only starts inngest-dev under the optional dev profile. In the default profile, inngest.send(...) calls fail with network errors, and background jobs (document processing/trend-search) never get queued.
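One possible shape of the fix, sketched against an assumed compose layout: keep INNGEST_DEV out of the default app service and set it only where the dev profile actually starts inngest-dev.

```yaml
# Sketch only: service names and structure are assumptions, not the repo's file.
services:
  app:
    environment:
      INNGEST_EVENT_KEY: ${INNGEST_EVENT_KEY:-}
      # INNGEST_DEV deliberately unset: without it, the SDK targets its
      # default (cloud) endpoint instead of a dev server that isn't running.

  inngest-dev:
    image: inngest/inngest
    profiles: ["dev"]

  app-dev:
    extends:
      service: app
    profiles: ["dev"]
    environment:
      INNGEST_DEV: http://inngest-dev:8288
```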
```ts
  TrendSearchInputSchema,
  TrendSearchEventDataSchema,
  SearchCategoryEnum,
} from "~/server/trend-search/types";
```
Point trend-search tests at the lib/tools module path
This import path (~/server/trend-search/types) does not exist in the repository, so the new trend-search suites fail during module resolution before any assertions run. For example, running pnpm test -- __tests__/api/trendSearch/types.pbt.test.ts errors with “Could not locate module ...”, which means these tests currently provide no regression coverage.
No description provided.