trevelyan on April 25, 2012
This is a great list, Xiao Hu. There are so many suggestions here and in your next post, some of which are immediately practical and some of which are a lot more speculative or difficult to script reliably. This deserves a really lengthy reply, but since we're at a cafe with limited online time, let me try to rush through it.

First -- yes! -- we're definitely planning to expand the number and range of questions offered in the default popup review. The big conceptual restriction on these questions is that they have to make sense for students who are studying the lessons in any order. The reason for this is that we want to emphasize that there ARE review tools built into the system, so more people are aware that we're building (or trying to build) a more comprehensive study system and not just throwing out podcasts.

What this means practically is that the very granular, percentage-type questions which map to an objective cognitive map of grammar and vocab concepts are going to be difficult to incorporate by default. But what should work are questions that can be auto-generated from the materials explicitly covered in the lessons (sentences, vocabulary, etc.). We will be expanding the number and type of questions over the next few months. We'll go through this list and see how many we can come up with, and then let people enable and disable them on the review tab.
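To make the auto-generation idea concrete, here's a rough sketch of the kind of thing I mean (Python, with made-up vocab data and function names -- the real questions would be generated from each lesson's actual transcript and vocabulary list):

```python
import random

# Hypothetical lesson vocabulary: (hanzi, pinyin, english) tuples.
# In practice this would be pulled from the lesson's vocab list.
LESSON_VOCAB = [
    ("你好", "nǐ hǎo", "hello"),
    ("谢谢", "xièxie", "thanks"),
    ("再见", "zàijiàn", "goodbye"),
    ("咖啡", "kāfēi", "coffee"),
]

def make_pinyin_question(vocab, rng=random):
    """Auto-generate one multiple-choice question from a lesson's vocab:
    pick a target word and use other lesson words as distractors."""
    target = rng.choice(vocab)
    distractors = [v for v in vocab if v is not target]
    choices = [target[1]] + [v[1] for v in rng.sample(distractors, 3)]
    rng.shuffle(choices)
    return {
        "prompt": f"Which is the correct pinyin for {target[0]}?",
        "choices": choices,
        "answer": target[1],
    }
```

Since every lesson ships with sentences and vocab, questions like this work no matter what order people study in, which is the constraint mentioned above.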

Identify-the-radical questions and the like are nice ideas we had not thought of, but which should be relatively easy for us to put together. We've been looking at ways to incorporate more character etymology into the lesson process as a result of feedback from a lot of people (thanks pefferie!), so this would be a nice complement to that sort of material.

That said... I don't think we'll be able to push more granular materials into the popup review by default. That is more the direction we were (and still are) hoping to go with the Popup Chinese test, which is still available but has been somewhat de-emphasized. Right now we have a backend "grammar map" of concepts and questions, and we aim to push the test in this direction ("strong grammar, but your listening skills are weak," etc.) in order to simplify the process of recommending materials people will find useful and avoid testing people on things they already know. So this is definitely on the roadmap, but it is also really, really difficult. As mentioned below, we are still working on getting explanations for our existing questions into the system, so it is still a work in progress. The backend grammar map is also quite messy and feels too arbitrary for my tastes: reliable categorization is often difficult and sometimes very arbitrary. It is going to require a lot of hands-on work to nail down something that is academically sound and also comprehensive.
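The scoring side of that grammar map is at least conceptually simple, even if the categorization is not. A toy sketch (all data and names hypothetical) of how tagged questions could roll up into per-concept strengths:

```python
from collections import defaultdict

# Hypothetical results log: (question_id, concept_tags, correct).
# The hard part -- which the sketch glosses over -- is producing
# reliable concept tags for every question in the first place.
results = [
    ("q1", ["listening", "tones"], False),
    ("q2", ["grammar", "le-particle"], True),
    ("q3", ["listening"], False),
    ("q4", ["grammar"], True),
]

def concept_scores(results):
    """Aggregate per-concept accuracy so the test can say things like
    'strong grammar, weak listening' and recommend material accordingly."""
    totals = defaultdict(lambda: [0, 0])  # tag -> [num_correct, num_attempted]
    for _qid, tags, correct in results:
        for tag in tags:
            totals[tag][0] += int(correct)
            totals[tag][1] += 1
    return {tag: correct / attempted
            for tag, (correct, attempted) in totals.items()}
```

With scores like these, recommending lessons is just a matter of surfacing material tagged with the learner's weakest concepts -- the messy part, as noted, is the tagging itself.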

I'll look at implementing a groups feature. Some of these ideas are also steadily percolating, waiting for technical improvements to make them more feasible. As a case in point, I've actually already looked at voice recognition -- the issue right now is that the technology isn't there yet in terms of accuracy and reliability (is a system that gets 80% accuracy useful or misleading?), so it would be more of a sales pitch than a really useful tool. There are also minor issues with Flash which mean it would involve recording and then waiting while data is sent to the server for analysis, so usability suffers. Maybe a separate app would make things easier if we could push the computation itself to smartphones. There may be more limited and fun ways to handle this.
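One of the "more limited" approaches would be to only act on the recognizer when it's confident, and ask the learner to retry otherwise, rather than risk wrong feedback one time in five. A tiny sketch of that gating logic (recognizer interface and threshold entirely hypothetical):

```python
# Minimum recognizer confidence before we trust a result enough
# to grade it; below this we ask the learner to try again.
CONFIDENCE_FLOOR = 0.9

def grade_attempt(hypothesis, expected, confidence):
    """Return 'correct', 'incorrect', or 'retry' given a hypothetical
    recognizer's top hypothesis and its confidence score."""
    if confidence < CONFIDENCE_FLOOR:
        return "retry"  # better to re-record than to mislead
    return "correct" if hypothesis == expected else "incorrect"
```

This trades coverage for trustworthiness, which seems like the right trade for a learning tool.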
