### XenoAmess's blog

By XenoAmess, history, 19 months ago

http://codeforces.com/contest/940/submission/36654025

http://codeforces.com/contest/940/submission/36654074

IS THERE ANY DIFFERENCE??????

PS. Nothing offensive, but if you know why this happened, please tell me. I've been on Codeforces for years and have never seen anything like this.

If you are as confused as me, then just move on.

By the way, I noticed that this question got some downvotes, but with no comments. If you have any ideas, just comment. If not, then you are not the one I'm asking for help. Thanks.


By XenoAmess, history, 19 months ago

@MikeMirzayanov

I used Codeforces's API system. It is strong and of good quality, but I think it can become better. Here are my suggestions.

1. Standings of the problemset. http://codeforces.com/problemset/standings I would be happy to get information from it, but I haven't found a way to do so.

2. Sending more data than I need. Recently I was developing a module of a program which measures how active users are, and I used http://codeforces.com/api/user.status?handle=Xeno_Amess&from=1&count=1000000000 (for example) to get data from Codeforces. All the data I need is creationTimeSeconds and verdict, but Codeforces sends me lots of data that is useless to me, like testset, id, author... Actually nearly 19/20 of the data I got was useless. This is bad not only for me but also for Codeforces, because both of us spend about 19 times more network bandwidth than necessary. Well, for me there is no difference between downloading 4 MB of data and 200 KB of data, but for such a big website as Codeforces, with so many users, it might matter. So I suggest that Codeforces provide a syntax to exclude some JSON fields. For example, when I use http://codeforces.com/api/user.status?handle=Xeno_Amess&from=1&count=1, what I get is:

```json
{
    "status": "OK",
    "result": [
        {
            "id": 32479205,
            "contestId": 894,
            "creationTimeSeconds": 1511111995,
            "relativeTimeSeconds": 2147483647,
            "problem": {
                "contestId": 894,
                "index": "D",
                "name": "Ralph And His Tour in Binary Country",
                "type": "PROGRAMMING",
                "points": 2000,
                "tags": [
                    "data structures",
                    "trees"
                ]
            },
            "author": {
                "contestId": 894,
                "members": [
                    {
                        "handle": "Xeno_Amess"
                    }
                ],
                "participantType": "PRACTICE",
                "ghost": false,
                "startTimeSeconds": 1511099700
            },
            "programmingLanguage": "GNU C++14",
            "verdict": "OK",
            "testset": "TESTS",
            "passedTestCount": 44,
            "timeConsumedMillis": 1091,
            "memoryConsumedBytes": 355635200
        }
    ]
}
```

while all the data I actually need is:

```json
{
    "status": "OK",
    "result": [
        {
            "creationTimeSeconds": 1511111995,
            "verdict": "OK"
        }
    ]
}
```


I suggest there be a syntax to do such a thing. Of course, this is just a friendly suggestion.


By XenoAmess, history, 3 years ago

So this is today's problem.

443B - Kolya and Tandem Repeat

Although the problem is easy, there is a lot of fun in it if you dig deep.

# 1.

First, why is it an easy problem?

The thing that makes this problem so easy is the limit on n: n <= 200.

Every high school student should be able to get it accepted with an O(n^3) solution.

In it we use one dimension for the start, one dimension for the end, and one dimension for the check.

Wow. A really brute-force solution.
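As a sketch, that brute force might look like this (my own reconstruction with made-up names, not the linked submission). The key observation: a position at index >= n lies in the k appended characters, which we choose ourselves, so it matches anything.

```cpp
#include <string>
#include <algorithm>
using namespace std;

// O(n^3) brute force sketch for 443B: s is the original string,
// k characters of our choice are appended afterwards.
int tandemBrute(const string& s, int k) {
    int n = s.size(), m = n + k, best = 0;
    for (int len = 1; 2 * len <= m; ++len)        // half-length of the repeat
        for (int i = 0; i + 2 * len <= m; ++i) {  // start of the repeat
            bool ok = true;
            for (int j = 0; j < len && ok; ++j)   // the O(n) check
                if (i + len + j < n && s[i + j] != s[i + len + j])
                    ok = false;                   // mismatch inside the original part
            if (ok) best = max(best, 2 * len);
        }
    return best;
}
```

On the samples from the statement, `tandemBrute("aaba", 2)` gives 6 and `tandemBrute("abracadabra", 10)` gives 20.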

But on this problem that solution runs so fast that everybody gets accepted in 15 ms, the fastest time Codeforces reports.

But when we face bigger test data, the O(n^3) solution is never that fast again.

What will happen when we increase n to 5000?

We use one of the 15 ms O(n^3) solutions for the test.

Here is the solution :

6916391

(Sorry Natureal, don't beat me, at least not too hard~)

And Here is the data :

22681015

(I generated it randomly.)

Well, on my computer it runs for 38.846 s.

# 2.

So is there a quicker solution?

Of course there is.

We can use Rabin-Karp to speed up the third dimension, the check of whether a given begin and end can work, from O(n) to O(1).

That means the whole solution is O(n^2).

You can easily learn the basics of Rabin-Karp from any website, so I won't explain it here, but I do put comments in the code.
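A sketch of the idea (again my own reconstruction, not the linked submission): only the characters of the second half that fall inside the original string are constrained, and they form one contiguous block, so the whole O(n) check collapses into a single substring-hash comparison. For brevity this sketch hashes with natural unsigned overflow rather than an explicit modulus; as with any Rabin-Karp solution, a hash collision could in principle give a wrong answer.

```cpp
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// O(n^2) Rabin-Karp sketch: precompute prefix hashes of s, then each
// (start, half-length) pair is checked with one hash comparison.
int tandemHash(const string& s, int k) {
    int n = s.size(), m = n + k, best = 0;
    const unsigned long long B = 131;
    vector<unsigned long long> h(n + 1, 0), p(n + 1, 1);
    for (int i = 0; i < n; ++i) {             // prefix hashes and powers of B
        h[i + 1] = h[i] * B + (unsigned char)s[i];
        p[i + 1] = p[i] * B;
    }
    auto sub = [&](int l, int len) {          // hash of s[l .. l+len)
        return h[l + len] - h[l] * p[len];
    };
    for (int len = 1; 2 * len <= m; ++len)
        for (int i = 0; i + 2 * len <= m; ++i) {
            int t = min(len, n - i - len);    // constrained part of the 2nd half
            if (t <= 0 || sub(i, t) == sub(i + len, t))
                best = max(best, 2 * len);
        }
    return best;
}
```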

22681205

Use it on the n=5000 data, and it runs for 0.723 s.

Don't stop thinking.

38.846 / 0.723 = 53.729.

So why is the O(n^2) solution only about 54 times faster than the O(n^3) one when n == 5000?

Because Rabin-Karp does lots of modulo operations, which are far slower than the simple equality checks in the O(n^3) solution.

# 3.

Let's get it better.

In that solution I check k against n.

When n < k, it is obvious that he can take the whole string (minus one character if its length is odd).

Otherwise, we do the O(n^2) search.

maxa is the largest answer we have met so far, and it starts from 0.

As you can see, I use maxa to skip the candidates whose length <= maxa; I only check substrings whose length > maxa.

But wait a minute. Why let maxa start from 0?

At the very beginning we already have a solution of length 2*k: use the k characters he adds to mirror the last k characters.

Also, you can see that in my code, when the check says "yes", the second dimension keeps running, because within one loop each new solution found is always larger than the last.

So why don't we reverse it?

We can go from the longest to the shortest; when the check says "yes", simply break out of the second loop.

So we add these two optimizations to our original O(n^2) code.
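A sketch of the two optimizations together (my reconstruction, not the linked code; for brevity the inner check is the plain character comparison — the same pruning sits just as well on top of the hashed version):

```cpp
#include <string>
using namespace std;

// Two optimizations on top of the O(n^2) search:
//   1. length 2*k is always achievable by mirroring the last k characters,
//      so lengths <= 2*k never need searching;
//   2. scan half-lengths from the largest down and return on the first hit,
//      since that hit is by construction the maximum.
int tandemOpt(const string& s, int k) {
    int n = s.size(), m = n + k;
    if (n <= k) return m / 2 * 2;           // whole string, minus one char if odd
    for (int len = m / 2; len > k; --len)   // only half-lengths above k
        for (int i = 0; i + 2 * len <= m; ++i) {
            bool ok = true;
            for (int j = 0; j < len && ok; ++j)
                if (i + len + j < n && s[i + j] != s[i + len + j])
                    ok = false;
            if (ok) return 2 * len;         // first hit is the answer
        }
    return 2 * k;                           // the mirror-the-tail baseline
}
```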

22681668

And I get 0.026 s on my n=5000 data.

Why so fast?

Because I made that data with n=k=5000, so the advanced O(n^2) solution sets maxa to the maximum possible answer right at the beginning.

So the remaining n^2 search is effectively 0^2, hence 0.026 s.

Well, that's not fair, because we are not always lucky enough to have k close to n.

So I change k in the n=k=5000 data; now it is n=5000, k=1000.

And get 0.143 s.

Still not fair, so we run the original O(n^2) code on the n=5000, k=1000 data.

And get 0.282 s.

This time it's fair enough.

But notice that the larger k gets, the faster the optimized code runs compared to the original.

Interesting, right?

# 4.

So how do we make it even faster?

I know some of you know about suffix arrays.

All right, we indeed can use a suffix array to handle the repeats that lie entirely within the n original characters, and use exgcd to handle the repeats that span both the n and the k parts.

And that is O(n). Right, I know.

But I want a way to make my current code faster now.

Just imagine I am a green hand who knows nothing about suffix arrays. We don't always have high-tech weapons to deal with problems in a contest.

OK, HERE I REALLY DO HAVE A CRAZY SOLUTION!!!

The bad news is that it can only deal with random data; targeted, well-designed data can hack it. But when the data is random, it really works.

22683992

Run it on the n=5000, k=1000 data (the data is random).

It gets 0.018 s.

When n gets larger, the difference gets bigger.

Run the n=500000, k=100000 data (the data is random).

22683992 gets 0.387 s.

22681668 gets ...

10 mins later.

still running.

20 mins later.

still running.

And it shows no sign of "Oh, I will be done soon!" Nope. Maybe next time I should make it print something like that, to show me how far it has gone.

All right, whatever.

I'd better go to sleep now. I hope I can report the result when I wake up tomorrow.

Actually, it will be simpler to understand if you have learned bucket sort.

The main idea is: if there is a tandem repeat, then there are two equal strings, one right next to the other. And if the two strings are equal, then their first three characters are equal. Ah, here it is: WE ONLY CHECK PAIRS OF POSITIONS THAT SHARE THE SAME PREFIX OF LENGTH 3.
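The bucket idea might be sketched like this (my own illustration, not the linked submission, restricted for simplicity to repeats lying fully inside the original string of lowercase letters — half-lengths 1 and 2 escape the 3-letter buckets and are checked directly, and the appended-k cases from the earlier sections would still need their own handling):

```cpp
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Bucket heuristic: both halves of a tandem repeat with half-length >= 3
// start with the same 3 letters, so bucket every start position by its
// 3-letter prefix (26^3 buckets) and only verify pairs from one bucket.
int tandemBuckets(const string& s) {
    int n = s.size(), best = 0;
    for (int i = 0; i + 2 <= n; ++i)            // half-length 1
        if (s[i] == s[i + 1]) best = max(best, 2);
    for (int i = 0; i + 4 <= n; ++i)            // half-length 2
        if (s[i] == s[i + 2] && s[i + 1] == s[i + 3]) best = max(best, 4);
    vector<vector<int>> bucket(26 * 26 * 26);
    for (int i = 0; i + 3 <= n; ++i)
        bucket[(s[i] - 'a') * 676 + (s[i + 1] - 'a') * 26 + (s[i + 2] - 'a')]
            .push_back(i);
    for (const auto& b : bucket)                // pairs sharing a 3-letter prefix
        for (size_t x = 0; x < b.size(); ++x)
            for (size_t y = x + 1; y < b.size(); ++y) {
                int i = b[x], len = b[y] - b[x];
                if (2 * len <= best || i + 2 * len > n) continue;
                bool ok = true;
                for (int j = 3; j < len && ok; ++j)  // first 3 chars already match
                    if (s[i + j] != s[i + len + j]) ok = false;
                if (ok) best = 2 * len;
            }
    return best;
}
```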

The strategy is really, really, really crazy, and since there are lots of great HACKERS on CF, I know it would not survive here.

But in other places things could be different, especially in places with no hacking at all.

Or we can still see it as a joke.

By the way, if the data is perfectly random, the solution would be about O(26^3 + n^2 / 26^3): each of the 26^3 buckets holds about n / 26^3 positions, so there are about 26^3 * (n / 26^3)^2 = n^2 / 26^3 pairs to try.

Of course real data wouldn't be that perfect.

Thanks to the force of buckets!

See you next time~