feat: migrate Gemini provider from OpenAI-compatible shim to native A…#563

Open
LeeroyDing wants to merge 6 commits into spacedriveapp:main from LeeroyDing:gemini

Conversation

@LeeroyDing

…PI integration

@coderabbitai
Contributor

coderabbitai bot commented Apr 14, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Adds native Gemini provider support: new llm::gemini module implementing client construction, non-streaming and streaming handlers, provider base URL update, routing changes to use native Gemini paths, and Gemini-specific token usage parsing.

Changes

Cohort / File(s) Summary
Provider Configuration
src/config/providers.rs
Changed GEMINI_PROVIDER_BASE_URL from "https://generativelanguage.googleapis.com/v1beta/openai" to "https://generativelanguage.googleapis.com/v1beta/".
Module Exports
src/llm.rs
Added pub mod gemini; to expose the new Gemini provider module.
Gemini Provider Implementation
src/llm/gemini.rs
New large module implementing build_client, call_gemini, and stream_gemini: model normalization, request construction (preamble, chat history, tools, thinking config), non-streaming and streaming execution, response conversion, and error mapping.
Provider Routing & Integration
src/llm/model.rs
Routed ApiType::Gemini to native handlers (call_gemini_native, stream_gemini_native), added native-call helper methods, adjusted usage extraction to branch on ApiType::Gemini, and prevented Gemini from using OpenAI-compatible endpoints.
Token Usage Extraction
src/llm/usage.rs
Added ExtendedUsage::from_gemini_body(&serde_json::Value) to parse Gemini usageMetadata fields into usage metrics (input/output/reasoning/cache tokens).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • jamiepine
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The title 'feat: migrate Gemini provider from OpenAI-compatible shim to native API...' is truncated but clearly describes the main change: migrating Gemini from an OpenAI-compatible implementation to native API integration, which aligns with the primary objective of this pull request.
Description check | ✅ Passed | The description '…PI integration' is a fragment of the full title continuation and directly relates to the changeset's primary objective of migrating to native Gemini API integration.
Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@LeeroyDing LeeroyDing marked this pull request as ready for review April 14, 2026 14:14
@LeeroyDing LeeroyDing mentioned this pull request Apr 14, 2026
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/llm/gemini.rs`:
- Around line 267-268: The current serialization of the full Gemini response
uses serde_json::to_value(&response).unwrap_or_default(), which silently drops
errors and can produce a null body; replace that pattern so serialization
failures are not discarded: change
serde_json::to_value(&response).unwrap_or_default() to explicitly handle the
Result (match or ?), logging or returning the error (e.g., via trace/error or
propagating the Error from the enclosing function) and ensure you always persist
the latest streamed body instead of leaving it null — specifically update the
`body` assignment for RawResponse, the stream-path update in the
`usage_metadata` branch, and the other occurrences noted around lines 323-331
and 355-357 so they either propagate the serialization error or log it and
fallback to the last successful streamed body rather than defaulting to null.
- Around line 162-186: The code currently swallows non-text UserContent variants
in the match inside the loop over content, which can drop
images/audio/video/documents and produce incomplete Gemini requests; update the
"_" branch (in the match over rig::message::UserContent within the loop that
builds text_parts and tool_responses) to either (a) convert those variants into
the corresponding Gemini Part objects and append them to the outgoing request
parts (so the request includes the multimodal data), or (b) return an explicit
error/Err result indicating unsupported multimodal input (with clear context
including the offending variant id), rather than silently ignoring them; ensure
you reference and update whatever collection is used to accumulate Gemini Parts
so downstream code sees the converted parts.
- Around line 179-205: The code collapses repeated tool calls because it uses
the same tool name for both the Gemini function identity and the tool-invocation
id; fix by preserving the original function name separately (e.g., keep
function_call.name as tool_name) and synthesize a unique invocation id for
correlation (e.g., append an incrementing index or UUID when building
ToolCall.id) so each invocation is distinct; update the places that create
ToolCall.id and the code that constructs the Gemini function-response name
(where ToolResult.id is sent back and where
Content::function_response_json(&name, ...), GeminiMessage, with_message(), and
with_user_message() are used) to use the synthesized unique id for correlation
while still including the original function_call.name/tool_name in the payload
or meta so you can match results to the correct function.
- Around line 28-40: build_client currently ignores ProviderConfig.base_url and
naively prepends "models/" to every model_name; update it so it respects custom
endpoints by passing provider_config.base_url (when present) into the Gemini
client constructor instead of always calling Gemini::with_model with only
api_key, and fix model name handling in build_client by only prepending
"models/" when model_name has no resource prefix (e.g., does not start with
"models/", "tunedModels/", "cachedContent/", etc.); alternatively, if the gemini
client doesn't support custom endpoints, fail fast with a clear CompletionError
mentioning base_url. Locate build_client and the Gemini::with_model call and
adjust to use ProviderConfig.base_url and a guarded model_name prefix check.

In `@src/llm/model.rs`:
- Around line 587-593: The usage parsing currently branches on the literal
provider string (`self.provider`) which misses cases where a provider is
configured with `provider_config.api_type = "gemini"`; update the selection that
calls `crate::llm::usage::ExtendedUsage::from_anthropic_body`,
`from_gemini_body`, and `from_openai_body` to branch on the resolved `ApiType`
(e.g., `ApiType::Anthropic`, `ApiType::Gemini`, `ApiType::OpenAi`) instead of
`self.provider`, and thread the resolved `ApiType` value through the helper call
site so the correct `from_gemini_body` path is taken for any provider whose
`api_type` is Gemini; apply the same change to the analogous branch around the
other occurrence (previously lines ~1775-1781) so both usage-parsing sites use
`ApiType` dispatch rather than the provider id.
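The `model.rs` finding above boils down to dispatching on the resolved api type rather than the provider id string. A minimal std-only sketch, assuming a simplified `ApiType` enum and an illustrative `resolve_api_type` helper (the real code resolves this from `provider_config.api_type`):

```rust
// Hedged sketch: branch usage parsing on the resolved ApiType, not on the
// literal provider string, so a custom provider configured with
// api_type = "gemini" still takes the Gemini path. Names are illustrative.
#[derive(Debug, PartialEq)]
enum ApiType {
    OpenAi,
    Anthropic,
    Gemini,
}

fn resolve_api_type(provider: &str, configured_api_type: Option<&str>) -> ApiType {
    // Prefer an explicitly configured api_type over the provider id.
    match configured_api_type.unwrap_or(provider) {
        "anthropic" => ApiType::Anthropic,
        "gemini" => ApiType::Gemini,
        _ => ApiType::OpenAi,
    }
}
```

With this shape, both usage-parsing sites match on the returned `ApiType` to pick `from_anthropic_body`, `from_gemini_body`, or `from_openai_body`.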
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b35ca4e9-97f6-4ffc-b186-e2ec9a04c5ee

📥 Commits

Reviewing files that changed from the base of the PR and between df4ac10 and ceaabc3.

⛔ Files ignored due to path filters (2)
  • Cargo.lock is excluded by !**/*.lock
  • Cargo.toml is excluded by !**/*.toml
📒 Files selected for processing (5)
  • src/config/providers.rs
  • src/llm.rs
  • src/llm/gemini.rs
  • src/llm/model.rs
  • src/llm/usage.rs

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
src/llm/gemini.rs (2)

188-210: ⚠️ Potential issue | 🔴 Critical

Tool results are sent back under the UUID instead of the declared function name.

Lines 296 and 363 synthesize call_* ids for Gemini tool calls, but this branch feeds tool_result.id back into function_response_json(). After the first tool execution, Gemini will see call_... as the function name instead of the tool it invoked, so tool-call round-trips break. Keep the original tool name alongside the synthetic correlation id and use the tool name here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/llm/gemini.rs` around lines 188 - 210, The bug is that you feed the
synthetic correlation id (tool_result.id / the call_* id) back into
function_response_json instead of the original declared tool name; update the
code that builds tool_responses so it preserves and pushes the original tool
name (e.g., the declared function name) alongside the result JSON, and then call
Content::function_response_json(&original_tool_name, result_json) in the loop;
locate the code that creates tool_responses and change the tuple to carry the
declared tool name (not tool_result.id) so function_response_json receives the
real tool name.
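The fix the comment describes can be sketched as carrying both identifiers: a synthetic id for correlation and the declared function name for the Gemini payload. `ToolInvocation` and the id scheme below are illustrative, not the real types in `gemini.rs`:

```rust
// Hedged sketch: keep the declared function name alongside a synthesized
// unique id, so repeated calls to the same tool stay distinct while the
// function_response still carries the real tool name.
#[derive(Debug, PartialEq)]
struct ToolInvocation {
    id: String,        // unique correlation id; never sent as the function name
    tool_name: String, // original declared function name, sent back to Gemini
}

fn assign_invocation_ids(function_calls: &[&str]) -> Vec<ToolInvocation> {
    function_calls
        .iter()
        .enumerate()
        .map(|(index, name)| ToolInvocation {
            // An incrementing index (or a UUID) keeps each invocation distinct.
            id: format!("call_{index}_{name}"),
            tool_name: (*name).to_string(),
        })
        .collect()
}
```

The function-response side then looks up `tool_name` by the correlation id instead of feeding the `call_*` id into `function_response_json`.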

339-351: ⚠️ Potential issue | 🟠 Major

The streamed raw body can still end up stale or Null.

accumulated_body only changes inside the usage_metadata branch, so later chunks without usage data are invisible to the final RawStreamingResponse on Line 377. In the no-usage case it stays Null, and both ignored try_lock() failures have the same effect. Update the body on every chunk and handle lock failures explicitly.

As per coding guidelines, "Don't silently discard errors. No let _ = on Results. Handle them, log them, or propagate them."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/llm/gemini.rs` around lines 339 - 351, The bug: accumulated_body is only
updated when chunk.usage_metadata is Some, causing later chunks without usage to
leave accumulated_body null; also try_lock failures are ignored. Fix by moving
the serde_json::to_value(&chunk) -> val update of accumulated_body outside the
usage_metadata branch so every chunk updates accumulated_body, and replace the
silent try_lock ignores with explicit error handling/logging (e.g., if let
Err(e) = accumulated_body.try_lock() { tracing::warn!(...) } and similarly for
accumulated_usage), keeping the existing convert_usage(usage_meta) and
accumulated_usage assignment inside the usage_metadata branch; ensure both locks
are attempted each chunk and any lock or serialization errors are logged rather
than discarded.
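The per-chunk update the comment asks for can be sketched with a plain `std::sync::Mutex`; the helper name and the string body are stand-ins for the real `accumulated_body` plumbing:

```rust
use std::sync::Mutex;

// Hedged sketch: update the shared raw body on every chunk, not only when
// usage metadata arrives, and surface try_lock failures instead of
// silently discarding them.
fn record_chunk(accumulated_body: &Mutex<String>, chunk_json: String) {
    match accumulated_body.try_lock() {
        Ok(mut body) => *body = chunk_json,
        // In the real code this would go through tracing::warn! rather
        // than stderr.
        Err(error) => eprintln!("could not update streamed body: {error}"),
    }
}
```

Calling this unconditionally for every chunk means the final `RawStreamingResponse` always sees the last successfully serialized body rather than `Null`.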
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/llm/gemini.rs`:
- Around line 224-243: The loop over content currently drops any unsupported
AssistantContent variants via `_ => {}`, losing history; update that branch in
the code that builds Parts from AssistantContent (the match on AssistantContent,
which produces Part::Text and Part::FunctionCall and uses fields like thought
and thought_signature) to either (a) convert unknown variants into a safe
fallback Part (e.g., Part::Text containing a serialized/diagnostic
representation of the variant and preserve any available signature/thought
fields) or (b) return/propagate a clear error (not silently ignore) so the
caller can handle it; replace the `_ => {}` with one of these behaviors and use
the function’s existing error type or logging to report the unsupported variant.

---

Duplicate comments:
In `@src/llm/gemini.rs`:
- Around line 188-210: The bug is that you feed the synthetic correlation id
(tool_result.id / the call_* id) back into function_response_json instead of the
original declared tool name; update the code that builds tool_responses so it
preserves and pushes the original tool name (e.g., the declared function name)
alongside the result JSON, and then call
Content::function_response_json(&original_tool_name, result_json) in the loop;
locate the code that creates tool_responses and change the tuple to carry the
declared tool name (not tool_result.id) so function_response_json receives the
real tool name.
- Around line 339-351: The bug: accumulated_body is only updated when
chunk.usage_metadata is Some, causing later chunks without usage to leave
accumulated_body null; also try_lock failures are ignored. Fix by moving the
serde_json::to_value(&chunk) -> val update of accumulated_body outside the
usage_metadata branch so every chunk updates accumulated_body, and replace the
silent try_lock ignores with explicit error handling/logging (e.g., if let
Err(e) = accumulated_body.try_lock() { tracing::warn!(...) } and similarly for
accumulated_usage), keeping the existing convert_usage(usage_meta) and
accumulated_usage assignment inside the usage_metadata branch; ensure both locks
are attempted each chunk and any lock or serialization errors are logged rather
than discarded.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 61f95d77-ea5f-48a2-94c8-61f672fc52c4

📥 Commits

Reviewing files that changed from the base of the PR and between ceaabc3 and 644c73e.

📒 Files selected for processing (2)
  • src/llm/gemini.rs
  • src/llm/model.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/llm/model.rs

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/scratch.rs (2)

2-3: Rename tr to a non-abbreviated parameter name.

Line 2 uses tr, which violates the no-abbreviations naming rule. Prefer tool_result (and ideally a more descriptive function name than test).

✏️ Proposed rename
-fn test(tr: &ToolResult) -> &str {
-    &tr.name
+fn tool_result_name(tool_result: &ToolResult) -> &str {
+    &tool_result.name
 }

As per coding guidelines: "Don't abbreviate variable names. Use queue not q, message not msg, channel not ch. Common abbreviations like config are fine."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scratch.rs` around lines 2 - 3, Rename the abbreviated parameter tr in
the function test to a descriptive name (e.g., tool_result) and update all
references inside the function (currently &tr.name) to match; also consider
renaming the function test to a more descriptive name (e.g., get_tool_name) so
the signature becomes something like fn get_tool_name(tool_result: &ToolResult)
-> &str, ensuring ToolResult and its .name access remain correct.

1-4: Avoid introducing a standalone scratch module for this helper.

This file is a tiny scratch helper and doesn’t look like a new logical component. Please fold this into an existing relevant module (or test module) instead of keeping a dedicated src/scratch.rs file.

As per coding guidelines: "Don't create many small files. Implement functionality in existing files unless it's a new logical component."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scratch.rs` around lines 1 - 4, The standalone helper function
test(&ToolResult) that returns &str should be removed from the isolated scratch
file and relocated into an existing relevant module or test module that already
uses rig::message::ToolResult; move the function (or inline its logic) into that
existing file, adjust imports to use rig::message::ToolResult there, and ensure
lifetimes remain valid when returning &tr.name (keep signature fn test(tr:
&ToolResult) -> &str or refactor to return String if borrowing is problematic);
delete the orphan scratch module after moving.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/llm/gemini.rs`:
- Around line 388-392: The final extraction uses body_for_final.lock().unwrap()
and usage_for_final.lock().unwrap() which can panic on a poisoned mutex; change
these to handle poisoning gracefully by matching the lock result (e.g.,
lock().map_or_else(|poison| poison.into_inner(), |guard| guard.clone() or
.take()) or similar), log or warn if poisoning occurred, and provide sensible
defaults if needed before yielding
RawStreamingChoice::FinalResponse(RawStreamingResponse { body, usage }) so the
stream does not panic when a prior panic poisoned the mutex.
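The poisoning-safe extraction the comment suggests can be sketched like this; `final_body` is an illustrative helper, not the actual function in `gemini.rs`:

```rust
use std::sync::Mutex;

// Hedged sketch: recover the inner value from a poisoned mutex instead of
// panicking while building the final streaming response.
fn final_body(body: &Mutex<String>) -> String {
    match body.lock() {
        Ok(guard) => guard.clone(),
        Err(poisoned) => {
            // A poisoned lock means a holder panicked; the data is still
            // usable, so log and take the last written value.
            eprintln!("body mutex was poisoned; using last written value");
            poisoned.into_inner().clone()
        }
    }
}
```

`PoisonError::into_inner` hands back the guard, so the stream can still yield `RawStreamingChoice::FinalResponse` with whatever was last written.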

---

Nitpick comments:
In `@src/scratch.rs`:
- Around line 2-3: Rename the abbreviated parameter tr in the function test to a
descriptive name (e.g., tool_result) and update all references inside the
function (currently &tr.name) to match; also consider renaming the function test
to a more descriptive name (e.g., get_tool_name) so the signature becomes
something like fn get_tool_name(tool_result: &ToolResult) -> &str, ensuring
ToolResult and its .name access remain correct.
- Around line 1-4: The standalone helper function test(&ToolResult) that returns
&str should be removed from the isolated scratch file and relocated into an
existing relevant module or test module that already uses
rig::message::ToolResult; move the function (or inline its logic) into that
existing file, adjust imports to use rig::message::ToolResult there, and ensure
lifetimes remain valid when returning &tr.name (keep signature fn test(tr:
&ToolResult) -> &str or refactor to return String if borrowing is problematic);
delete the orphan scratch module after moving.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b4bcb0dc-7974-4d26-819d-044a3f2e9c1b

📥 Commits

Reviewing files that changed from the base of the PR and between 644c73e and 4d97be2.

📒 Files selected for processing (2)
  • src/llm/gemini.rs
  • src/scratch.rs

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/llm/gemini.rs`:
- Around line 39-44: The code incorrectly treats provider_config.base_url as an
Option; change the handling in the Gemini client setup: instead of `if let
Some(base_url) = &provider_config.base_url` treat base_url as a plain String —
either always call client =
client.with_base_url(provider_config.base_url.clone()) if base_url is required,
or guard with a presence check like `if !provider_config.base_url.is_empty()`
before calling client.with_base_url(...); update the block around
Gemini::with_model and client.with_base_url to use provider_config.base_url as a
String accordingly.
- Around line 231-256: The loop currently rejects AssistantContent::Reasoning,
causing hard-fail; instead update the match arm for AssistantContent::Reasoning
to convert the reasoning payload into a plain-text Part (e.g., Part::Text) using
the same pattern as AssistantContent::Text: push Part::Text with the reasoning
text (concatenate or extract the visible string like
convert_messages_to_openai_responses does), and populate thought and
thought_signature if the Reasoning variant exposes them (or set them to None if
not); leave ToolCall behavior and tool_map handling unchanged.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: eedc270e-6d7f-4725-9d5c-52e87e72da97

📥 Commits

Reviewing files that changed from the base of the PR and between 4d97be2 and 47102c7.

📒 Files selected for processing (1)
  • src/llm/gemini.rs

…tex safety

- Fix base_url handling: ProviderConfig.base_url is String, not
  Option<String>. Use is_empty() guard instead of if-let Some().
- Handle AssistantContent::Reasoning in Gemini provider by mapping
  reasoning content to thought-tagged Part::Text instead of returning
  a hard error. Follows the same pattern used in model.rs and
  compactor.rs.
- Poisoned mutex handling was already fixed in 47102c7 — no further
  changes needed there.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
src/llm/gemini.rs (1)

32-37: ⚠️ Potential issue | 🟠 Major

Keep explicit Gemini resource prefixes intact.

Line 33 only special-cases models/. A configured resource like tunedModels/... will be rewritten to models/tunedModels/..., which breaks valid native Gemini identifiers. Only prepend models/ for bare model ids.

🔧 Proposed fix
-    let model_str = if model_name.starts_with("models/") {
+    let model_str = if model_name.contains('/') {
         model_name.to_string()
     } else {
         format!("models/{model_name}")
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/llm/gemini.rs` around lines 32 - 37, The current logic only checks
starts_with("models/") and prepends "models/" which incorrectly rewrites valid
Gemini resource prefixes like "tunedModels/..."; update the condition that
builds model_str so it only prepends "models/" for bare model ids (i.e., when
model_name contains no '/' character). Specifically, change the branch that sets
model_str (referencing model_str and model_name) to return
model_name.to_string() when model_name contains '/' (preserving any existing
prefix like "tunedModels/..."), and only format!("models/{model_name}") when
model_name has no '/'.
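The guarded prefix check from the proposed fix is small enough to sketch in full; `normalize_model_name` is an illustrative name for the logic around line 33:

```rust
// Hedged sketch of the proposed fix: only bare model ids get a "models/"
// prefix; explicit resource paths such as "tunedModels/..." or
// "models/..." pass through untouched.
fn normalize_model_name(model_name: &str) -> String {
    if model_name.contains('/') {
        model_name.to_string()
    } else {
        format!("models/{model_name}")
    }
}
```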
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/llm/gemini.rs`:
- Around line 166-223: The code currently accumulates all text into text_parts
and all tool results into tool_responses, changing the original ordering; modify
the content iteration in the block that processes rig::message::UserContent so
that when you encounter a ToolResult you first flush any buffered text_parts
into the builder (call with_user_message with the joined text and clear the
buffer) before emitting the function response (use
Content::function_response_json and with_message / GeminiMessage as done now);
likewise after the loop flush any remaining text_parts into current_builder so
messages are replayed in the original interleaved order (references: variables
text_parts, tool_responses, current_builder and methods with_user_message,
with_message, Content::function_response_json, GeminiMessage).
- Around line 452-465: The match arm for effort strings currently doesn't handle
the "max" value and falls through to the numeric/unknown branch; update the
pattern matching in the match on effort (the branch handling "auto" | "" |
"none" | "off" | "low" | "medium" | "high" | "minimal") to include "max" and map
it to Some(ThinkingConfig::new().with_thinking_level(ThinkingLevel::High));
ensure this change is made where ThinkingConfig and ThinkingLevel are used so
"max" is treated equivalent to "high" instead of hitting the unknown branch that
logs a warning.
- Around line 187-195: The parsed JSON assigned to result_json may be a
primitive/array, but Gemini requires functionResponse.response to be a JSON
object; change the logic around serde_json::from_str(&result_text) so that after
parsing you check result_json.is_object(), and if it is not an object (including
cases where parsing returned a primitive, array, number, boolean, or null),
replace/wrap it with serde_json::json!({"result": result_json}) before pushing
into tool_responses; keep using tool_map and tool_result.id as before to build
tool_name and push (i.e., update the handling in the block that currently sets
result_json and pushes (tool_name, result_json)).
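The "max" effort mapping above can be sketched as a simple string match; `ThinkingLevel` stands in for the rig gemini type, and the exact set of handled strings here is illustrative of the review's description rather than a copy of the real branch:

```rust
// Hedged sketch: fold "max" into the "high" branch so it no longer falls
// through to the unknown-value arm that logs a warning.
#[derive(Debug, PartialEq)]
enum ThinkingLevel {
    Low,
    Medium,
    High,
}

fn thinking_level_for_effort(effort: &str) -> Option<ThinkingLevel> {
    match effort {
        "minimal" | "low" => Some(ThinkingLevel::Low),
        "auto" | "" | "medium" => Some(ThinkingLevel::Medium),
        "high" | "max" => Some(ThinkingLevel::High),
        "none" | "off" => None,
        // Unknown values are logged with a warning upstream.
        _ => None,
    }
}
```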

---

Duplicate comments:
In `@src/llm/gemini.rs`:
- Around line 32-37: The current logic only checks starts_with("models/") and
prepends "models/" which incorrectly rewrites valid Gemini resource prefixes
like "tunedModels/..."; update the condition that builds model_str so it only
prepends "models/" for bare model ids (i.e., when model_name contains no '/'
character). Specifically, change the branch that sets model_str (referencing
model_str and model_name) to return model_name.to_string() when model_name
contains '/' (preserving any existing prefix like "tunedModels/..."), and only
format!("models/{model_name}") when model_name has no '/'.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3f3baf69-31fb-4cc0-8a0c-9a602ddbbad6

📥 Commits

Reviewing files that changed from the base of the PR and between 47102c7 and 0941dbe.

📒 Files selected for processing (1)
  • src/llm/gemini.rs

…ing levels

- Update `models/` prefix check to avoid rewriting valid resource paths
- Fix message ordering by flushing accumulated text into the builder upon
  encountering a function response, instead of emitting all text and then
  all responses at the very end
- Map `max` thinking effort to `high`
- Ensure the JSON generated for Gemini function responses is an Object,
  wrapping primitive JSON returns as `{"result": …}` to satisfy the API