# Skills and Knowledge
Skills are the units of behavior in Agents.KT. An agent does nothing on its own -- all its capabilities come from its skills. A skill is either a pure Kotlin function or an LLM-driven capability backed by tools and knowledge. This page covers skill definition, knowledge entries, and how the framework exposes skill metadata to the LLM.
`Skill<IN, OUT>` is a named, described capability with typed input and output:

```kotlin
class Skill<IN, OUT>(
    val name: String,
    val description: String,
    val inType: KClass<*>,
    val outType: KClass<*>,
)
```
- `name` -- unique within the agent. Used for routing, logging, and LLM descriptions.
- `description` -- mandatory. Tells the LLM (and human readers) what this skill does. Used in skill routing prompts and `toLlmDescription()`.
- `inType` / `outType` -- `KClass` references captured via `reified` generics. The agent uses `outType` during `validate()` to ensure at least one skill matches the agent's `OUT` type.
A skill is itself callable: it implements `operator fun invoke(input: IN): OUT`.
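Because `invoke` is an operator, a skill behaves like a plain function value. A minimal, framework-free sketch of that callable shape (the class name `MiniSkill` is hypothetical, not the framework's actual type):

```kotlin
import kotlin.reflect.KClass

// Hypothetical stand-in for the framework's Skill class, showing only
// the callable shape described above -- not the real implementation.
class MiniSkill<IN : Any, OUT : Any>(
    val name: String,
    val description: String,
    val inType: KClass<IN>,
    val outType: KClass<OUT>,
    private val impl: (IN) -> OUT,
) {
    // The skill is itself callable: skill(input) runs the implementation.
    operator fun invoke(input: IN): OUT = impl(input)
}

fun main() {
    val upper = MiniSkill("uppercase", "Convert text to uppercase",
        String::class, String::class) { s: String -> s.uppercase() }
    println(upper("hello")) // HELLO
}
```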
The most common pattern is to define skills directly inside an agent's `skills { }` block:

```kotlin
val agent = agent<String, String>("formatter") {
    skills {
        skill<String, String>("uppercase", "Convert text to uppercase") {
            implementedBy { it.uppercase() }
        }
        skill<String, String>("lowercase", "Convert text to lowercase") {
            implementedBy { it.lowercase() }
        }
    }
}
```

The `skill<IN, OUT>(name, description) { }` function inside `SkillsBuilder` creates the skill, applies the configuration block, and registers it with the agent in a single step.
When a skill is complex, shared across agents, or you want to reference it separately, define it standalone using the top-level skill() function, then add it with unaryPlus (+):
```kotlin
val codeReview = skill<String, String>("review", "Review code for issues") {
    knowledge("style-guide", "Project coding standards") {
        File("docs/style.md").readText()
    }
    implementedBy { code ->
        // review logic
        "Looks good: $code"
    }
}

val agent = agent<String, String>("reviewer") {
    skills {
        +codeReview // unaryPlus registers the standalone skill
    }
}
```

The `+` operator (`unaryPlus`) on `Skill<IN, OUT>` inside a `SkillsBuilder` block registers the skill just like the inline version. This is pure syntax sugar -- the result is identical.
`implementedBy { }` sets the skill's implementation to a Kotlin lambda. The skill is not agentic -- no LLM is involved. The lambda receives the input and returns the output directly.
```kotlin
skill<Int, Int>("double", "Double the input value") {
    implementedBy { it * 2 }
}
```

Properties after calling `implementedBy`:

- `isAgentic` is `false`
- `implementation` holds the lambda
- `toolNames` is `null`
Pure Kotlin skills execute instantly. They are the right choice when the behavior is deterministic and does not need LLM reasoning.
```kotlin
val tokenizer = skill<String, List<String>>("tokenize", "Split text into tokens") {
    implementedBy { input ->
        input.split("\\s+".toRegex()).filter { it.isNotBlank() }
    }
}

tokenizer("hello world") // ["hello", "world"]
```

Calling `tools()` marks the skill as LLM-driven. The framework enters the agentic loop (see Architecture Overview) when this skill is selected.
```kotlin
skill<String, String>("implement", "Implement a feature from a description") {
    tools("read_file", "write_file", "compile")
}
```

The `vararg names` parameter lists which of the agent's registered tools this skill may use. Pass no arguments to allow only knowledge tools and memory tools:

```kotlin
skill<String, String>("answer", "Answer a question using knowledge") {
    tools() // agentic, but no action tools -- only knowledge tools are available
}
```

Properties after calling `tools()`:
- `isAgentic` is `true`
- `toolNames` holds the list of tool name strings (possibly empty)
- `implementation` is `null` -- the LLM drives execution
**Important:** `tools()` and `implementedBy { }` are mutually exclusive. Calling `tools()` clears any previously set implementation, and vice versa.
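The clearing behavior can be pictured with a small, self-contained sketch (the class `SkillBuilderSketch` is illustrative only; the real `SkillsBuilder` carries more state):

```kotlin
// Illustrative sketch of the mutual exclusion between tools() and
// implementedBy() -- each setter clears the other's state.
class SkillBuilderSketch<IN, OUT> {
    var implementation: ((IN) -> OUT)? = null
        private set
    var toolNames: List<String>? = null
        private set
    val isAgentic: Boolean get() = toolNames != null

    fun implementedBy(impl: (IN) -> OUT) {
        implementation = impl
        toolNames = null           // leaving agentic mode
    }

    fun tools(vararg names: String) {
        toolNames = names.toList() // possibly empty
        implementation = null      // the LLM drives execution now
    }
}

fun main() {
    val b = SkillBuilderSketch<String, String>()
    b.implementedBy { it.uppercase() }
    b.tools("read_file")              // clears the lambda
    println(b.isAgentic)              // true
    println(b.implementation == null) // true
}
```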
When the agent's OUT type is not String, the framework needs a way to parse the LLM's text response into a typed object. You have two options:
- `@Generable` annotation -- the framework auto-generates a lenient JSON deserializer. See Architecture Overview for details on the generation package.
- `transformOutput { }` -- a manual parsing lambda on the skill.
```kotlin
data class Sentiment(val label: String, val score: Double)

skill<String, Sentiment>("analyze", "Analyze sentiment") {
    tools()
    transformOutput { raw ->
        // raw is the LLM's text response
        val parts = raw.split(",")
        Sentiment(parts[0].trim(), parts[1].trim().toDouble())
    }
}
```

The `transformOutput` lambda takes the LLM's raw string response and returns `OUT`. It is called before the agent's `castOut` function, giving the skill full control over parsing.
If both `transformOutput` and `@Generable` are available, `transformOutput` takes priority.
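The parsing half of the example above can be exercised on its own. This sketch extracts the same logic as a plain function so it runs without the framework; the raw string `"positive, 0.93"` is an invented sample response:

```kotlin
data class Sentiment(val label: String, val score: Double)

// Same parsing logic as the transformOutput lambda above, extracted
// as a plain function so it can be tested directly.
fun parseSentiment(raw: String): Sentiment {
    val parts = raw.split(",")
    return Sentiment(parts[0].trim(), parts[1].trim().toDouble())
}

fun main() {
    println(parseSentiment("positive, 0.93")) // Sentiment(label=positive, score=0.93)
}
```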
Knowledge entries attach named, lazily-evaluated data to a skill. They serve as the skill's context -- reference material, documentation, configuration, or any data the LLM might need.
```kotlin
skill<String, String>("implement", "Implement a feature") {
    knowledge("api-spec", "REST API specification") {
        File("docs/api-spec.yaml").readText()
    }
    knowledge("db-schema", "Database table definitions") {
        database.query("SELECT * FROM information_schema.tables").toString()
    }
    tools("write_file")
}
```

Each `knowledge()` call takes three arguments:
| Parameter | Type | Purpose |
|---|---|---|
| `key` | `String` | Unique name within the skill. Becomes the tool name in agentic mode. |
| `description` | `String` | Tells the LLM what this knowledge contains. Used in tool descriptions and `toLlmDescription()`. |
| `provider` | `() -> String` | Lambda that produces the knowledge content. Evaluated lazily. |
**Agentic skills:** knowledge entries are exposed as tools. The LLM calls them by name when it needs the data. The provider lambda executes only on demand. This keeps system prompts small and avoids loading unused knowledge.

```
System prompt includes: "api-spec: REST API specification" (description only)
LLM calls tool:         api-spec
Framework executes:     provider() → returns file contents
```
**Non-agentic skills:** when `toLlmContext()` is called (e.g., for prompt construction), all knowledge entries are evaluated eagerly and their content is included inline.
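A rough, self-contained sketch of that eager path (the helper `renderContext` is invented for illustration; the real `toLlmContext()` formats more metadata). Each provider lambda is invoked at render time and its result inlined:

```kotlin
// Invented helper mimicking the eager knowledge inlining described above:
// every provider runs immediately and its content is embedded in the text.
fun renderContext(description: String, knowledge: Map<String, () -> String>): String =
    buildString {
        appendLine(description)
        appendLine("Knowledge:")
        for ((key, provider) in knowledge) {
            appendLine("--- $key ---")
            appendLine(provider()) // eager: evaluated here, not on demand
        }
    }.trimEnd()

fun main() {
    val text = renderContext(
        "Summarize text.",
        mapOf("style-guide" to { "Use short sentences." }),
    )
    println(text)
}
```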
Because the provider is a lambda, it runs each time it is called. If the underlying data changes between calls, the LLM gets the latest version:
```kotlin
var callCount = 0

skill<Int, Int>("add", "Adds one") {
    knowledge("dynamic") { callCount++; "value $callCount" }
    implementedBy { it + 1 }
}

// First call:  knowledge["dynamic"]!!() → "value 1"
// Second call: knowledge["dynamic"]!!() → "value 2"
```

Multiple agents or skills can share the same data source through closures. Because knowledge providers are lambdas, they can capture any external mutable state:
```kotlin
val corpus = mutableMapOf(
    "style" to "Prefer val over var. Use data classes.",
    "rules" to "Max line length 120. No wildcard imports.",
)

val coder = agent<String, String>("coder") {
    skills {
        skill<String, String>("write", "Write code") {
            knowledge("style-guide", "Coding style rules") { corpus["style"]!! }
            knowledge("rules", "Linting rules") { corpus["rules"]!! }
            implementedBy { "fun ${it}() {}" }
        }
    }
}

val reviewer = agent<String, String>("reviewer") {
    skills {
        skill<String, String>("review", "Review code") {
            knowledge("style-guide", "Coding style rules") { corpus["style"]!! }
            knowledge("rules", "Linting rules") { corpus["rules"]!! }
            implementedBy { "LGTM: $it" }
        }
    }
}

// Both agents see the same data. Update the corpus once, both see it:
corpus["style"] = "Prefer val over var. Use data classes. Use sealed interfaces."
// coder's knowledge now includes "sealed interfaces"
// reviewer's knowledge now includes "sealed interfaces"
```

This pattern is especially powerful in pipelines, where an earlier agent can mutate shared state and a later agent sees the updates via its lazy knowledge providers:
```kotlin
val context = mutableMapOf<String, String>()

val extractor = agent<String, String>("extractor") {
    skills {
        skill<String, String>("extract", "Extract keywords") {
            knowledge("context", "Shared pipeline context") { context.toString() }
            implementedBy { input ->
                val keywords = input.split(" ").filter { it.length > 3 }
                context["keywords"] = keywords.joinToString(",")
                keywords.joinToString(",")
            }
        }
    }
}

val formatter = agent<String, String>("formatter") {
    skills {
        skill<String, String>("format", "Format with context") {
            knowledge("context", "Shared pipeline context") { context.toString() }
            implementedBy { input ->
                val kw = context["keywords"] ?: "none"
                "Formatted [$kw]: $input"
            }
        }
    }
}

val pipeline = extractor then formatter
pipeline("The quick brown fox jumps")
// "Formatted [quick,brown,jumps]: quick,brown,jumps"
```

The framework generates prompt text from skill metadata. There are three levels of description:
### toLlmDescription()

Returns a markdown description of the skill for use in routing prompts and system messages. Auto-generated from the skill's name, description, input/output types, and knowledge entry descriptions.
```kotlin
val s = skill<String, String>("summarize", "Condense text into a brief summary") {
    knowledge("style-guide", "Writing style rules") { "..." }
    tools()
}

println(s.toLlmDescription())
```

Output:
```
## Skill: summarize

**Input:** String
**Output:** String

Condense text into a brief summary

**Knowledge:**
- style-guide -- Writing style rules
```
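Note that only the knowledge *descriptions* appear, never the content -- content stays behind the lazy providers. A self-contained sketch of how such a block could be assembled (the helper `renderDescription` is invented; the real `toLlmDescription()` is the framework's own):

```kotlin
// Invented sketch of assembling the markdown description from metadata.
// Knowledge is passed as key -> description only; content is never loaded.
fun renderDescription(
    name: String,
    input: String,
    output: String,
    description: String,
    knowledge: Map<String, String>,
): String = buildString {
    appendLine("## Skill: $name")
    appendLine("**Input:** $input")
    appendLine("**Output:** $output")
    appendLine(description)
    if (knowledge.isNotEmpty()) {
        appendLine("**Knowledge:**")
        knowledge.forEach { (key, desc) -> appendLine("- $key -- $desc") }
    }
}.trimEnd()

fun main() {
    println(renderDescription(
        "summarize", "String", "String",
        "Condense text into a brief summary",
        mapOf("style-guide" to "Writing style rules"),
    ))
}
```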
You can override this entirely:
```kotlin
skill<String, String>("summarize", "Summarize") {
    llmDescription("You summarize text. Keep it under 3 sentences. Use active voice.")
    tools()
}
```

### toLlmContext()

Returns the full description plus all knowledge content inlined. Used for non-agentic skills where knowledge cannot be loaded lazily via tools:
```kotlin
println(s.toLlmContext())
```

Output:
```
## Skill: summarize
...

Knowledge:
--- style-guide ---
Use short sentences. Prefer active voice.
```
### knowledgeTools()

Returns a list of `KnowledgeTool` objects -- one per knowledge entry. The agentic loop converts these into `ToolDef` instances so the LLM can call them by name:
```kotlin
val tools: List<KnowledgeTool> = s.knowledgeTools()
// [KnowledgeTool(name="style-guide", description="Writing style rules", call=...)]
```

When the LLM calls the style-guide tool, the framework invokes `call()`, which runs the knowledge provider lambda. The result is returned to the LLM as a tool response message.
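A minimal sketch of that shape (the type `KnowledgeToolSketch` is hypothetical, standing in for the framework's `KnowledgeTool`): the provider runs each time `call()` is invoked, never at registration.

```kotlin
// Hypothetical shape of a knowledge tool: name, description, and a call()
// that runs the provider lambda on every invocation by the LLM.
data class KnowledgeToolSketch(
    val name: String,
    val description: String,
    val call: () -> String,
)

fun main() {
    var reads = 0
    val tool = KnowledgeToolSketch("style-guide", "Writing style rules") {
        reads++
        "Use short sentences."
    }
    println(tool.call()) // provider runs now, not at registration
    println(reads)       // 1
}
```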
| Feature | Pure Kotlin Skill | Agentic Skill |
|---|---|---|
| Defined with | `implementedBy { }` | `tools(...)` |
| `isAgentic` | `false` | `true` |
| Execution | Lambda called directly | LLM agentic loop |
| Tools available | None | Named tools from agent's `toolMap` |
| Knowledge loading | Eager (via `toLlmContext()`) | Lazy (via tool calls) |
| `transformOutput` | Not used | Parses LLM text to `OUT` |
| LLM required | No | Yes (`model { }` must be configured) |