refactor(exercises): standardize answer language handling across exercise scripts
All checks were successful
Deploy to production / deploy (push) Successful in 2m48s
- Introduced a mechanism to infer answer language based on question phrasing in multiple exercise scripts, enhancing consistency in exercise data.
- Updated question formats to clarify the intent of exercises, improving user understanding and engagement.
- Streamlined the code for better maintainability and clarity in exercise generation processes.
```diff
@@ -349,7 +349,8 @@ function createFamilyConversationExercises(nativeLanguageName) {
           instruction: 'Übersetze den Bisaya-Satz ins ' + nativeLanguageName,
           questionData: JSON.stringify({
             type: 'multiple_choice',
-            question: `Wie sagt man "${conv.bisaya}" auf ${nativeLanguageName}?`,
+            answerLanguage: 'native',
+            question: `Was bedeutet "${conv.bisaya}"?`,
             options: options
           }),
           answerData: JSON.stringify({
@@ -379,6 +380,7 @@ function createFamilyConversationExercises(nativeLanguageName) {
           instruction: 'Was bedeutet dieser Bisaya-Satz?',
           questionData: JSON.stringify({
             type: 'multiple_choice',
+            answerLanguage: 'native',
             question: `Was bedeutet "${conv.bisaya}"?`,
             options: options
           }),
```
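The commit message mentions inferring the answer language from the question's phrasing. The actual helper is not shown in this diff; the following is a minimal sketch of what such an inference could look like, assuming questions phrased as "Was bedeutet …?" or "Wie sagt man …?" expect an answer in the learner's native language. The function name `inferAnswerLanguage` and the `'native'`/`'target'` return values are illustrative, not taken from the repository.

```javascript
// Hypothetical sketch: derive answerLanguage from question phrasing.
// German question stems like "Was bedeutet" ("what does ... mean") or
// "Wie sagt man" ("how do you say") ask for the meaning in the learner's
// native language; anything else is assumed to expect the target language.
function inferAnswerLanguage(question) {
  if (/^(Was bedeutet|Wie sagt man)/.test(question)) {
    return 'native';
  }
  return 'target'; // e.g. the learner must answer in Bisaya
}

console.log(inferAnswerLanguage('Was bedeutet "kumusta"?'));       // 'native'
console.log(inferAnswerLanguage('Übersetze ins Bisaya: "Hallo"')); // 'target'
```

Storing the result as an explicit `answerLanguage` field in `questionData`, as the diff does, means consumers of the exercise data no longer have to re-derive the intent from the question string at display time.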