Merge remote-tracking branch 'origin/main'
01Petard committed Feb 12, 2025
2 parents c4f07e4 + 6f70f4f commit 989b391
Showing 5 changed files with 284 additions and 49 deletions.
2 changes: 1 addition & 1 deletion docs/.vitepress/config.mts
@@ -236,7 +236,7 @@ export default defineConfig({
text: "杂谈",
items: [
// {text: "16形人格测试+政治倾向测试", link: "/杂谈/16形人格测试+政治倾向测试"},
{text: "关于yt-dlp", link: "/杂谈/关于yt-dlp"},
{text: "开发信息备忘录", link: "/杂谈/开发信息备忘录"},
{text: "设计模式的荼毒体现在哪?", link: "/杂谈/设计模式的荼毒体现在哪"},
{text: "对于大模型绘制UML图的调研", link: "/杂谈/对于大模型绘制UML图的调研"},
{text: "2024年常见Ollama大模型对比", link: "/杂谈/2024年常见Ollama大模型对比"},
2 changes: 1 addition & 1 deletion docs/杂谈/index.md
@@ -10,7 +10,7 @@ _<u>这就没必要搞个目录了吧?</u>_

[//]: # (0. [16形人格测试+政治倾向测试]&#40;./16形人格测试+政治倾向测试.md&#41;)

1. [关于yt-dlp](./关于yt-dlp.md)
1. [开发信息备忘录](./开发信息备忘录.md)
2. [设计模式的荼毒体现在哪?](./设计模式的荼毒体现在哪.md)
3. [对于大模型绘制UML图的调研](./对于大模型绘制UML图的调研.md)
4. [2024年常见Ollama大模型对比](./2024年常见Ollama大模型对比.md)
47 changes: 0 additions & 47 deletions docs/杂谈/关于yt-dlp.md

This file was deleted.

274 changes: 274 additions & 0 deletions docs/杂谈/开发信息备忘录.md
@@ -0,0 +1,274 @@
# Tools

## yt-dlp: a powerful video downloader

> Project repo: [yt-dlp/yt-dlp: A feature-rich command-line audio/video downloader](https://github.com/yt-dlp/yt-dlp)

Task: download the video at `https://www.bilibili.com/BV1j6DYYpExQ`.

`-o '~/Desktop/%(title)s.%(ext)s'` saves the file to the user's desktop, using the video title as the filename:

```shell
yt-dlp --cookies cookies.txt -o '~/Desktop/%(title)s.%(ext)s' 'https://www.bilibili.com/BV1j6DYYpExQ'
```
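
If there is no exported cookies.txt at hand, yt-dlp can also read cookies straight from a browser profile; a sketch assuming Chrome is installed and logged in to bilibili:

```shell
yt-dlp --cookies-from-browser chrome -o '~/Desktop/%(title)s.%(ext)s' 'https://www.bilibili.com/BV1j6DYYpExQ'
```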

## Removing the `.idea` / `.vscode` folders from git tracking

```shell
# untrack the IDE config folders (the files stay on disk)
git rm --cached -r .idea/ .vscode/
```
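
To keep them from being re-added later, the same folders can also go into `.gitignore`; a minimal sketch:

```shell
# append the IDE folders to .gitignore so git keeps ignoring them
printf '.idea/\n.vscode/\n' >> .gitignore
```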

## EasyConnect

> Proxy server: `https://sslvpn.zjut.edu.cn/`
>
> Username: `your student ID`
>
> Password: `your password`

## Manually upscaling a video (super-resolution)

1. Extract frames with `ffmpeg`:

```shell
ffmpeg -i .\tmp.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 .\tmp_in_1440\frame%08d.jpg
```

2. Upscale the frames with a super-resolution tool; I usually use Real-CUGAN or Real-ESRGAN:

```shell
.\realesrgan-ncnn-vulkan.exe -i tmp_in_1080 -o tmp_out_1080 -n realesr-animevideov3 -s 2 -f jpg
```

3. Merge the frames back into a video with `ffmpeg`, copying the original audio track:

```shell
ffmpeg -i .\tmp_out_1080\frame%08d.jpg -i .\1080.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 24 -pix_fmt yuv420p output_w_audio.mp4
```
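
The three steps can be chained into one rough helper script; the folder names, `input.mp4`, the binary location, and the 24 fps framerate below are illustrative assumptions:

```shell
# extract frames, upscale 2x, then re-mux with the original audio
mkdir -p tmp_in tmp_out
ffmpeg -i input.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_in/frame%08d.jpg
./realesrgan-ncnn-vulkan -i tmp_in -o tmp_out -n realesr-animevideov3 -s 2 -f jpg
ffmpeg -framerate 24 -i tmp_out/frame%08d.jpg -i input.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -pix_fmt yuv420p upscaled.mp4
```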


# 舜宇贝尔 (Sunnybaer)

## Company network

```
WiFi SSID: Sunny-Guest
Password: 1234567890
Network portal username: sunnybaer
Password: 1234567890
```

## Personal accounts

```
[email protected]
Password: dddd
abc123!
Sunnybaer111!
```

## Connecting to the server

```shell
ssh [email protected]
# password: abcd123!
```

## Database

```
Host: 10.0.0.93:3306
User: root
Password: m5Jwk.JR*Uxpbt^9f8jp
```
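
For a quick connection test from a shell, assuming the `mysql` client is installed locally (it will prompt for the password above):

```shell
mysql -h 10.0.0.93 -P 3306 -u root -p
```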

## Backend project maintenance commands (reference)

```shell
git pull
```

```shell
mvn clean package -Ptest
```

```shell
nohup java -Xmx3g -Xms3g -XX:+UseG1GC -jar /var/www/wms-pro/target/wms-1.0.jar > /dev/null 2>&1 &
```

```shell
ps -ef | grep java | grep wms | grep -v grep | awk '{print $2}' | xargs kill -15
```
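
A rough sketch combining the steps above into a single redeploy pass; the project path and the `test` profile are taken from the commands above, adjust as needed:

```shell
cd /var/www/wms-pro
git pull
mvn clean package -Ptest
# stop the running instance, if any, then start the freshly built jar
ps -ef | grep java | grep wms | grep -v grep | awk '{print $2}' | xargs -r kill -15
nohup java -Xmx3g -Xms3g -XX:+UseG1GC -jar /var/www/wms-pro/target/wms-1.0.jar > /dev/null 2>&1 &
```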

## baer-tools online toolbox

Frontend repo: http://10.0.0.177/informatization/front/bear-tools-web

Backend repo: http://10.0.0.177/informatization/backend/baer-tools

> Live page: http://10.0.0.93:3000/#/

## 精工 warehouse management system (WMS)

Frontend repo: http://10.0.0.177/informatization/front/wms-pro-web

Backend repo: http://10.0.0.177/informatization/backend/wms-pro

> Live page: http://10.0.0.93:9528
>
> User: `shenzhuo`
>
> Password: `acbd@1234`

## 精工 daily production plan dashboard

Front-office frontend repo: http://10.0.0.177/informatization/front/production-web

Admin frontend repo: http://10.0.0.177/informatization/front/production

Server-side repo: http://10.0.0.177/informatization/backend/manufacturing_systems

> Front-office page: http://10.0.0.93:9528
>
> Admin page: http://10.0.0.49:3142/dashboard/index
>
> User: `admin`
>
> Password: `111111`

## Training management system

Frontend repo: http://10.0.0.177/informatization/front/tms-web

Backend repo: http://10.0.0.177/informatization/backend/train-system

> User-facing page: http://10.0.0.49:8065/frontDeskindex/homePage
>
> Admin page: http://10.0.0.49:8065/index
>
> User: `test01`
>
> Password: `Aa@000000`

## Yanfeng Seating digital management system

Frontend repo: http://10.0.0.177/informatization/front/yanfeng-admin

Backend repo: http://10.0.0.177/informatization/backend/data-agv-yanfeng

> Live page: `http://10.0.0.93:9000/`
>
> User: `admin`
>
> Password: `acbd@1234`

## (undo) One-stop procurement and sales platform

Frontend repo: http://10.0.0.177/informatization/front/market-platform

Backend repo: http://10.0.0.177/informatization/backend/market-platform-server

## AI research notes

DeepSeek API key:

```
sk-cd1266cc951d4d7fbfee56f42b4d3d35
```

```shell
curl https://api.deepseek.com/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-cd1266cc951d4d7fbfee56f42b4d3d35" \
-d '{
"model": "deepseek-chat",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
"stream": false
}'
```

Alibaba Cloud Bailian (百炼) API key:

```
sk-9fe2e1b0fd60405393c4195a0761044f
```

```shell
curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer sk-9fe2e1b0fd60405393c4195a0761044f" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-r1-distill-qwen-7b",
"messages": [
{
"role": "user",
"content": "9.9和9.11谁大"
}
]
}'
```

```
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer sk-9fe2e1b0fd60405393c4195a0761044f" \
--header 'Content-Type: application/json' \
--data '{
"model": "qwen-plus",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "用萝卜、土豆、茄子做饭,给我个菜谱"
}
]
}'
```

```shell
# pull and run the DeepSeek-R1 1.5B distilled model locally
ollama run deepseek-r1:1.5b
```

```shell
# download the bce-embedding-base_v1 embedding model (used for embeddings, not chat)
ollama run lrs33/bce-embedding-base_v1
```

> These environment variables expose configuration options for the Ollama server and help with performance tuning, debugging, and managing model loading (a usage example follows this list).
>
> - **OLLAMA_DEBUG**: show extra debug information (e.g. set `OLLAMA_DEBUG=1`).
> - **OLLAMA_HOST**: IP address and port the Ollama server binds to (default `127.0.0.1:11434`).
> - **OLLAMA_KEEP_ALIVE**: how long a model stays loaded in memory (default `"5m"`).
> - **OLLAMA_MAX_LOADED_MODELS**: maximum number of models loaded per GPU.
> - **OLLAMA_MAX_QUEUE**: maximum number of queued requests.
> - **OLLAMA_MODELS**: path to the models directory.
> - **OLLAMA_NUM_PARALLEL**: maximum number of parallel requests.
> - **OLLAMA_NOPRUNE**: do not prune model blobs on startup.
> - **OLLAMA_ORIGINS**: comma-separated list of allowed origins.
> - **OLLAMA_SCHED_SPREAD**: always spread models across all GPUs.
> - **OLLAMA_FLASH_ATTENTION**: enable Flash Attention.
> - **OLLAMA_KV_CACHE_TYPE**: quantization type for the K/V cache (default `f16`).
> - **OLLAMA_LLM_LIBRARY**: set the LLM library to bypass autodetection.
> - **OLLAMA_GPU_OVERHEAD**: VRAM to reserve per GPU, in bytes.
> - **OLLAMA_LOAD_TIMEOUT**: maximum time to allow a stalled model load before giving up (default `"5m"`).
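
A minimal example of applying a couple of these variables when starting the Ollama server on Linux/macOS; the host and keep-alive values here are illustrative:

```shell
# bind to all interfaces and keep models loaded for one hour
OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=1h ollama serve
```
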
```shell
# generate a completion from the local server; cap the output length via options.num_predict
curl -X POST http://localhost:11434/api/generate -H "Content-Type: application/json" -d '{"model":"deepseek-r1:1.5b","prompt":"写一首关于秋天的诗","options":{"num_predict":50}}'
```

```shell
# PowerShell: point OLLAMA_MODELS at the models root (not a specific model's manifest folder) before starting the server
$env:OLLAMA_MODELS="C:\Users\15203\.ollama\models"; ollama serve
```

```shell
# request embeddings via Ollama's /api/embed endpoint (server default port is 11434)
curl -X POST http://localhost:11434/api/embed -H "Content-Type: application/json" -d '{"model": "lrs33/bce-embedding-base_v1:latest","input": "这是一段需要生成嵌入的文本"}'
```

8 changes: 8 additions & 0 deletions docs/软件/nodejs相关命令.md
@@ -106,6 +106,14 @@ node -v
npm -v
```

## Installing pnpm

> pnpm requires Node.js 18 or later.

```shell
npm install pnpm -g
```
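
To confirm the install, mirroring the `node -v` / `npm -v` checks above:

```shell
pnpm -v
```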

## Installing hexo

Install hexo:
