@@ -69,7 +69,13 @@ Model inference with CLDK starts with a local LLM server. We'll use Ollama to ho
```
=== "macOS"
-    On macOS, Ollama runs automatically after installation. You can verify it's running by opening Activity Monitor and searching for "ollama".
+    On macOS, Ollama runs automatically after installation.
+
+    You can check the status with:
+    ```shell
+    launchctl list | grep "ollama"
+    ```
+
## Step 2: Pull the code LLM.
@@ -83,6 +89,16 @@ Model inference with CLDK starts with a local LLM server. We'll use Ollama to ho
ollama run granite-code:8b-instruct 'Write a function to print hello world in python'
```
+ You should see a response like:
+ ````shell
+ ❯ ollama run granite-code:8b-instruct 'Write a function to print hello world in python'
+ ```python
+ def say_hello():
+     print("Hello World!")
+ ```
+ ````
+
+
## Step 3: Download Sample Codebase
We'll use Apache Commons CLI as our example Java project:
@@ -106,7 +122,7 @@ export JAVA_APP_PATH=/path/to/commons-cli-1.7.0
Let's build a pipeline that analyzes Java methods using LLMs. Create a new file `code_summarization.py`:
- ```python title="code_summarization.py" linenums="1" hl_lines="7 10 12-17 21-22 24-25 34-37"
+ ```python title="code_summarization.py" linenums="1" hl_lines="7 10 12-17 24-25 27-28 39"
import ollama
from cldk import CLDK
from pathlib import Path
@@ -124,6 +140,9 @@ for file_path, class_file in analysis.get_symbol_table().items():
    for type_name, type_declaration in class_file.type_declarations.items():
        # Iterate over methods
        for method in type_declaration.callable_declarations.values(): # (3)!
+           # Skip constructors
+           if method.is_constructor:
+               continue
            # Get code body
            code_body = Path(file_path).absolute().resolve().read_text()
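To see the shape of this traversal without a Java project on hand, here is a toy, self-contained sketch of the same nested loops over a symbol table, with constructors skipped. The dataclasses below are hypothetical stand-ins for illustration only, not CLDK's real types:

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for CLDK's symbol-table entries (illustration only).
@dataclass
class ToyCallable:
    declaration: str
    is_constructor: bool = False

@dataclass
class ToyTypeDeclaration:
    callable_declarations: dict = field(default_factory=dict)

@dataclass
class ToyClassFile:
    type_declarations: dict = field(default_factory=dict)

# A miniature "symbol table": one file, one class, two callables.
symbol_table = {
    "Options.java": ToyClassFile(type_declarations={
        "Options": ToyTypeDeclaration(callable_declarations={
            "<init>": ToyCallable("Options()", is_constructor=True),
            "addOption": ToyCallable("addOption(Option opt)"),
        }),
    }),
}

# Same nested-loop shape as the pipeline above; constructors are skipped.
for file_path, class_file in symbol_table.items():
    for type_name, type_declaration in class_file.type_declarations.items():
        for method in type_declaration.callable_declarations.values():
            if method.is_constructor:
                continue
            print(f"{type_name}.{method.declaration}")
```

Running this prints only `Options.addOption(Option opt)`, since the constructor entry is filtered out by the guard clause.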
@@ -143,7 +162,7 @@ for file_path, class_file in analysis.get_symbol_table().items():
            # Prompt Ollama
            summary = ollama.generate(
                model="granite-code:8b-instruct", # (6)!
-               prompt=instruction).get("response") # (7)!
+               prompt=instruction).get("response")

            # Print output
            print(f"\nMethod: {method.declaration}")
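For reference, the `instruction` handed to `ollama.generate` above is just a string assembled from the sanitized focal class and the method declaration. A minimal sketch of that assembly step follows; the placeholder values and exact prompt wording are hypothetical (in the tutorial they come from CLDK's extraction):

```python
# Hypothetical placeholder values; in the tutorial these come from CLDK.
method_declaration = "public Options addOption(final Option opt)"
sanitized_class = "public class Options implements Serializable { /* ... */ }"

# Build a summarization prompt from the focal class and the method signature.
instruction = (
    "Summarize the following Java method in one sentence.\n\n"
    f"Focal class:\n{sanitized_class}\n\n"
    f"Method: {method_declaration}\n"
)
print(instruction)
```

Because the prompt is a plain string, you can freely tweak its wording or add few-shot examples before passing it to the model.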
@@ -156,8 +175,7 @@ for file_path, class_file in analysis.get_symbol_table().items():
3. In a nested loop, we can quickly iterate over the methods in the project and extract the code body.
4. CLDK comes with a number of tree-sitter-based utilities that can be used to extract and manipulate code snippets.
5. We use the `sanitize_focal_class()` method to extract the focal class for the method and sanitize any unwanted code in just one line of code.
- 6. Try your favorite model for code summarization. We use the `granite-code:8b-instruct` model in this example.
- 7. We prompt Ollama with the sanitized class and method declaration to generate a summary for the method.
+ 6. We use the `granite-code:8b-instruct` model in this example. Try a different model from the [Ollama model library](https://ollama.com/library).
---
### Running `code_summarization.py`