zenolus's blog

By zenolus, history, 4 years ago, In English

Hello, Codeforces community!

Before going into the details, I would like to give a huge thanks to the community for the interest and support. I'm really awestruck to see that over a thousand user handles have been entered into the web app in barely 30 hours (at the time of writing this blog). If you haven't gone through the previous blog, here it is.

There were some issues when I first made it live, and I am sorry for that. I am no expert in backend work, so you had to face a lot of initial downtime. Although I've applied a temporary fix, there's still work to be done. That aside, I would like to thank manish_joshi, arthurg, RestingRajarshi, -is-this-fft-, and everyone else who checked it out and gave their reviews. I have made some tweaks based on that input, and there's more to do.

Okay, so here is how problems are categorized:

Step 1: While parsing the problem set, I put problems into slabs according to their ratings: <1200, 1200-1600, 1600-1900, 1900-2100, 2100-2400, and >2400. The reasoning is that, in my experience, people often get stuck in these belts for long durations before their rating spikes up to the next block. This also helps determine which problem tags are more common and thus more useful to practice.
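
A minimal sketch of how this bucketing could look (all names here are illustrative, not taken from the actual codebase, and the handling of boundary ratings is my assumption):

```ts
// Bucket problems into the rating slabs described above; per-slab tag counts
// can then be used to rank which tags are most worth practicing in that slab.

interface Problem {
  rating: number;
  tags: string[];
  solvedCount: number;
}

// Slab i covers ratings in [bounds[i-1], bounds[i]); boundary handling assumed.
const SLAB_BOUNDS = [1200, 1600, 1900, 2100, 2400];

function slabIndex(rating: number): number {
  let i = 0;
  while (i < SLAB_BOUNDS.length && rating >= SLAB_BOUNDS[i]) i++;
  return i; // 0 => <1200, ..., 5 => >=2400
}

// Count how often each tag appears within a slab; a more frequent tag is more
// useful to practice for users stuck in that belt.
function tagFrequency(problems: Problem[], slab: number): Map<string, number> {
  const freq = new Map<string, number>();
  for (const p of problems) {
    if (slabIndex(p.rating) !== slab) continue;
    for (const t of p.tags) freq.set(t, (freq.get(t) ?? 0) + 1);
  }
  return freq;
}
```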

Step 2: We take the user's max rating, and all further calculations are based on it. The idea is plain and simple: the higher we reach, the harder we need to practice for a positive delta. Even if your rating falls, you should aim for a higher one.

Step 3: All previous submissions are analyzed and a solvability score is calculated, which denotes the solved count of problems the user is most likely able to solve. The calculation is simple as well: Solvability = sum over all AC submissions of (solved count × problem rating) / 4000. The constant 4000 comes from the fact that the highest-rated problems are R3500.
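
Expressed as a sketch (the field names are mine; the formula is the one above):

```ts
// Solvability = sum over AC submissions of (solvedCount * problemRating) / 4000.
// 4000 keeps the rating factor below 1, since the hardest problems are R3500.

interface AcSubmission {
  problemRating: number; // rating of the solved problem
  solvedCount: number;   // how many users have solved that problem
}

function solvability(acSubmissions: AcSubmission[]): number {
  return acSubmissions.reduce(
    (sum, s) => sum + (s.solvedCount * s.problemRating) / 4000,
    0,
  );
}
```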

Step 4: The problem set is scanned to get suggestions on the basis of the following three criteria, weighted with a priority ratio of 1:2:3 (a sketch of the combined score follows the list):

  1. The absolute difference of a problem's rating from the user's max rating.

  2. The problem tags are compared with the slab's sorted tag list (more problems of a tag => solving a problem of that tag would be more beneficial) and a tag score is calculated.

  3. The ratio of that problem's solved count to the solved count of the most-solved problem in the rating range suitable for the user.
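
Here is a hedged sketch of how these three criteria might combine. My assumptions, not confirmed details of the app: each criterion is normalized to [0, 1], the weights 1, 2, and 3 map to the criteria in the order listed, and the rating distance is inverted into a closeness score (the 800-point cap is also mine):

```ts
interface Candidate {
  rating: number;
  tagScore: number;         // criterion 2: from comparison with the slab's sorted tag list, in [0, 1]
  solvedCountRatio: number; // criterion 3: solvedCount / max solvedCount in the suitable range
}

function suggestionScore(c: Candidate, userMaxRating: number): number {
  // Criterion 1: turn the absolute rating difference into a closeness score,
  // assuming ~800 points as the widest distance still worth suggesting.
  const closeness = Math.max(0, 1 - Math.abs(c.rating - userMaxRating) / 800);
  return 1 * closeness + 2 * c.tagScore + 3 * c.solvedCountRatio;
}
```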

The "UpSolve from last contest" section doesn't go through any of the above. However, the newly added "Recommended upsolves from past contests" section, added based on this suggestion, does undergo the above processing.

The analysis section is rather simple. It follows similar processing, though not as elaborate. The catch is that, in order to determine strengths and weaknesses, all submissions except those SKIPPED by the online judge are taken into consideration. An AC gives you +1 for each of the problem's tags, while any other verdict gives you -0.2.
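
As a sketch, the tally could look like this (the verdict strings are the ones the Codeforces API uses; the rest is illustrative):

```ts
// Every non-SKIPPED submission contributes to the per-tag score:
// +1 per tag on an accepted submission, -0.2 per tag on any other verdict.

interface Submission {
  verdict: string; // e.g. "OK", "WRONG_ANSWER", "SKIPPED"
  tags: string[];  // tags of the submitted problem
}

function tagStrengths(subs: Submission[]): Map<string, number> {
  const score = new Map<string, number>();
  for (const s of subs) {
    if (s.verdict === "SKIPPED") continue; // skipped by the judge: ignored
    const delta = s.verdict === "OK" ? 1 : -0.2;
    for (const t of s.tags) score.set(t, (score.get(t) ?? 0) + delta);
  }
  return score;
}
```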

Anyone interested can check out the complete project on my GitHub profile: a ReactJS frontend and a MongoDB-ExpressJS-NodeJS backend. I'm sorry for the messy code; I am still a complete newbie who wants to get into web dev.

Regarding updates, I have a few.

  1. As previously mentioned, I am working on a Team practice mode and tag whitelisting/blacklisting.

  2. Secondly, I have a new idea to make the analysis part more usable. We have a stopwatch so everyone can see how much time they take to solve a problem. How about storing that data to analyze how much time you have taken over the last 5-10 problems of a given rating? For example, say the times taken to solve 5 R1500 problems were 15 min, 25 min, 20 min, 30 min, and 20 min. You wouldn't need to keep track of that yourself; the analysis section would show a combined graph of problem ratings vs. the last 5-10 times taken (a speculative sketch follows this list). I think it can help analyze what kinds of problems one finds more difficult or takes more time to solve.

  3. Lastly, as -is-this-fft- said in the last post, there may be quite a portion of people who prefer a plain motherfuckingwebsite.com format over a fancy UI. I was thinking, why not? I can retain all the functionality and add a plain and simple side page for such people to use.
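
For the timing idea in point 2, here is a speculative sketch. This feature doesn't exist yet, so everything below is an assumption about how it could work: keep only the last N stopwatch readings per problem rating, then average them for the graph.

```ts
const WINDOW = 5; // last 5-10 solves per rating; 5 chosen here arbitrarily

const timesByRating = new Map<number, number[]>();

function recordSolveTime(rating: number, minutes: number): void {
  const times = timesByRating.get(rating) ?? [];
  times.push(minutes);
  if (times.length > WINDOW) times.shift(); // drop the oldest reading
  timesByRating.set(rating, times);
}

function averageTime(rating: number): number | undefined {
  const times = timesByRating.get(rating);
  if (!times || times.length === 0) return undefined;
  return times.reduce((a, b) => a + b, 0) / times.length;
}

// Example from the post: five R1500 solves of 15, 25, 20, 30, and 20 minutes
// average out to 22 minutes, which would become one point on the graph.
```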

Let me know your thoughts/suggestions in the comments. Thanks.

By zenolus, history, 4 years ago, In English

Hello, Codeforces community!

First of all, I would like to thank MikeMirzayanov for such an awesome platform. It's been almost a year since I joined, and ever since, I have been in a dilemma about which problem to solve. It's commonly accepted that one should solve the most-solved problems first. Many other recommender websites out there follow the same pattern: solved count for the win. However, after some days of following it, it kinda felt repetitive. Yes, the problems that everyone can solve are the easier ones. But how do we determine whether a problem can teach us something new? Codeforces has the problem rating system, which gives a more accurate view of which problems are easy and which are not. But as with solved counts, rating can't be considered the sole differentiator.

After analyzing for some time, I found three important factors that decide which problem is best suited for practice: problem rating, problem tags, and solved count. The solved count is a good measure of which problems you are more likely to be able to solve. Problem rating helps us classify problems as easy, medium, or hard for us according to our own rating. Thirdly, I would like to explain the importance of problem tags. Some tags are more frequent in certain rating ranges. For example, if you check problems rated 1300-1500, you are more likely to see implementation, brute force, or greedy problems. So someone whose rating is just below this range can expect problems with these tags to appear in live contests, say as a Div 2 A/B. Solving these problems will definitely help in ranking up.

Combining all of this, I built UpSolve.me as a platform that gives personalized practice suggestions. The rating taken into consideration is the user's max rating, as I believe we must always look beyond what we have already achieved. Everyone has bad days and ratings fall, but it ain't bad targeting a high positive delta.

May the codeFORCEs be with you!

Screenshots

Note: I am still working on the project. Team practice and tag-based practice modes will arrive soon. Feel free to look around and share your opinions. Your involvement will only help me improve the platform. Any constructive criticism or feature requests are dearly welcome.

UPD: I didn't expect so many people flooding in at once. Heroku doesn't seem able to handle this many requests at one go. I'll fix it ASAP. Stay tuned. Thanks!

UPD2: Moved from Heroku to Google Compute Engine. Feel free to explore it now!

UPD3: Things seemed fixed on GCE, but another problem popped up: HTTPS requests were not passing through. I have reverted to Heroku and enabled instant restart on crash so that you don't experience downtime. However, it's a temporary fix. If anyone can help me set it up on GCE, please reach out to me.
