feat(bisaya-course): refine phase 4 didactics and enhance course content generation
All checks were successful
Deploy to production / deploy (push) Successful in 5m19s
- Corrected grammatical errors and improved the phrasing in the BISAYA_PHASE4_DIDACTICS, ensuring clarity and accuracy in the learning materials.
- Updated the course content generation script to include lessons from phase 5, enhancing the overall structure and flow of the course.
- Introduced a new vocabulary course content synchronization process, improving the integration of vocabulary resources across different modules.
- Enhanced the VocabService to dynamically adjust temperature settings based on the mode, optimizing response generation for different contexts.
- Added new localized titles and vocabulary entries in multiple languages, enriching the learning experience for users.
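The mode-dependent temperature adjustment mentioned above can be illustrated with a small sketch. The mode names, temperature values, and the helper function here are illustrative assumptions, not taken from the actual VocabService code:

```python
# Hypothetical sketch of mode-dependent temperature selection, as described
# in the commit message. Mode names and values are illustrative assumptions.

# Higher temperature for open-ended writing, lower for deterministic checking.
MODE_TEMPERATURES = {
    "free_writing": 0.8,   # creative, varied output
    "correction": 0.2,     # precise, reproducible corrections
    "vocab_drill": 0.4,    # middle ground for vocabulary practice
}

def temperature_for_mode(mode: str, default: float = 0.7) -> float:
    """Return the sampling temperature for a given mode, with a fallback default."""
    return MODE_TEMPERATURES.get(mode, default)
```

A service following this pattern would pass the returned value as the `temperature` field of each chat-completion request.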
@@ -24,18 +24,18 @@ ollama --version
 
 ## 2) Load the model
 
-Recommended for free writing and correction mode:
+Recommended for free writing and correction mode (CPU-friendly):
 
 ```bash
-ollama pull qwen2.5:7b-instruct
+ollama pull qwen2.5:3b-instruct
 ```
 
-Optional smaller alternative (less RAM):
+Optional larger alternative (better quality, but slower on CPU):
 
 ```bash
-ollama pull qwen2.5:3b-instruct
+ollama pull qwen2.5:7b-instruct
 ```
 
 ## 3) Start the Ollama server
 
 ```bash
@@ -55,7 +55,7 @@ The server then runs by default at:
 
 The preset sets:
 
 - Base URL: `http://127.0.0.1:11434/v1`
-- Model: `qwen2.5:7b-instruct`
+- Model: `qwen2.5:3b-instruct`
 - API key: not required
 
 ## 5) Function test
@@ -79,12 +79,12 @@ Once the answer arrives, everything is connected correctly.
 
 - Pull the model again:
 
 ```bash
-ollama pull qwen2.5:7b-instruct
+ollama pull qwen2.5:3b-instruct
 ```
 
-### Slow responses
+### Slow responses (common on CPU-only servers)
 
-- Use a smaller model (`qwen2.5:3b-instruct`)
+- Use `qwen2.5:3b-instruct` as the default
 - Reduce other GPU/CPU load
 
 ## Notes for the A2 goal
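The preset in the diff (base URL `http://127.0.0.1:11434/v1`, model `qwen2.5:3b-instruct`, no API key) targets Ollama's OpenAI-compatible endpoint. A minimal sketch of building the function-test request follows; the helper name is an assumption, and actually sending the request of course requires a locally running `ollama serve`:

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    """Return (url, headers, body) for a POST to Ollama's OpenAI-compatible
    chat completions endpoint. No API key is needed for a local server."""
    url = f"{base_url}/chat/completions"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Preset values from the guide above.
url, headers, body = build_chat_request(
    "http://127.0.0.1:11434/v1", "qwen2.5:3b-instruct", "Sag Hallo auf Bisaya."
)
```

Posting this body to the returned URL (with `curl` or `urllib.request`) and receiving a reply reproduces the guide's function test.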