refactor(exercises): standardize answer language handling across exercise scripts
All checks were successful
Deploy to production / deploy (push) Successful in 2m48s
- Introduced a mechanism to infer the answer language from question phrasing in multiple exercise scripts, enhancing consistency in exercise data.
- Updated question formats to clarify the intent of exercises, improving user understanding and engagement.
- Streamlined the code for better maintainability and clarity in exercise generation.
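As a rough illustration of the inference described above, the answer language could be derived from how each question is phrased. This is a minimal sketch only; the function name `inferAnswerLanguage` and the exact phrasing patterns are assumptions, not taken from the repository.

```javascript
// Hypothetical sketch: infer answerLanguage from question phrasing.
// Questions like 'Wie sagt man "..." auf Bisaya?' ask for the target-language
// word, while questions like 'Was bedeutet "..."?' ask for the native word.
function inferAnswerLanguage(question) {
  // Ends in "auf Bisaya?" -> the expected answer is in the target language.
  return /auf Bisaya\?$/.test(question) ? 'target' : 'native';
}
```

With a convention like this, each script can set `answerLanguage` consistently instead of hard-coding it per exercise.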
@@ -134,6 +134,7 @@ function createFamilyWordsExercises(nativeLanguageName) {
       instruction: 'Wähle die richtige Übersetzung.',
       questionData: {
         type: 'multiple_choice',
+        answerLanguage: 'target',
         question: `Wie sagt man "${nativeWord}" auf Bisaya?`,
         options: options
       },