refactor(exercises): standardize answer language handling across exercise scripts
All checks were successful
Deploy to production / deploy (push) Successful in 2m48s
- Introduced a mechanism to infer answer language based on question phrasing in multiple exercise scripts, enhancing consistency in exercise data.
- Updated question formats to clarify the intent of exercises, improving user understanding and engagement.
- Streamlined the code for better maintainability and clarity in exercise generation processes.
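As a rough illustration of the "infer answer language from question phrasing" idea, a helper might key off the question templates used in the diff below. The function name `inferAnswerLanguage` and the fallback behavior are hypothetical, not part of this commit:

```javascript
// Hypothetical helper sketching the inference described above:
// questions phrased "Wie sagt man … auf Bisaya?" expect a Bisaya answer
// ('target'), while "Was bedeutet …?" expects a German answer ('native').
function inferAnswerLanguage(question) {
  if (/auf Bisaya\?$/.test(question)) return 'target';
  if (/^Was bedeutet/.test(question)) return 'native';
  return 'target'; // assumed default; real scripts may handle this differently
}

console.log(inferAnswerLanguage('Wie sagt man "Danke" auf Bisaya?')); // 'target'
console.log(inferAnswerLanguage('Was bedeutet "Salamat"?'));          // 'native'
```

Centralizing the mapping this way keeps each exercise script from hard-coding `answerLanguage` next to every question template.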
```diff
@@ -582,6 +582,7 @@ async function updateFoodCareExercises() {
       instruction: 'Wähle die richtige Übersetzung.',
       questionData: JSON.stringify({
         type: 'multiple_choice',
+        answerLanguage: 'target',
         question: `Wie sagt man "${conv.native}" auf Bisaya?`,
         options: [
           conv.bisaya,
@@ -608,6 +609,7 @@ async function updateFoodCareExercises() {
       instruction: 'Wähle die richtige Übersetzung.',
       questionData: JSON.stringify({
         type: 'multiple_choice',
+        answerLanguage: 'native',
         question: `Was bedeutet "${conv.bisaya}"?`,
         options: [
           conv.native,
```