### kostka's blog

By kostka, 15 months ago

Whether you’re looking to practice your coding, prepare for an upcoming interview, connect with a global community, or have some fun — Kick Start is here to help! Round G 2021 starts in less than 24 hours, on October 16 at 12:00 UTC. You will have 3 hours to solve what you can during the round.

Hope to see you in Round G 2021!

Comments (35)
 » 15 months ago, # | ← Rev. 3 →   Probably the worst Kick Start round of the last 2 years in terms of quality. Also, I don't know why, but as 2021 has progressed, the quality of Kick Start has kept going down.

A — just simulate.

B — you know this is something related to medians, then you try 95708430734 things regarding the median, then you literally guess that you should put all boundaries (of the horizontal and vertical type at once) in a single vector, sort it, use the median, hope it works, and you get AC.

C — you know this is some 4sum-ish thing. You notice the very weird constraints, just try to submit with different things like map/unordered_map/gp_hash_table, see that they TLE, and suddenly see one of them work.

D — you see this is geometry, you close your machine and run as fast as you can.

A very bad round in terms of quality for me.
 •  » » A.. B.. C.. I was ready to argue, because you are purple while I was only cyan, but then I saw your comments about D. Yeah! 100% agreed!! Run ASAP!!!
 » What is the intended time complexity for problem C?
•  » » $O(N^2)$ worked for me
•  » » » How to do it in O(N^2)?
 •  » » » » I used dynamic programming. My dp table was of size $k+1$, where $dp[i]$ holds the minimum number of trees needed to get $i$ bananas. Initially $dp[0]$ was $0$ and all other values from 1 through k were $INF$. Then I iterated through the input array, calculating the sums of contiguous subarrays. Once I had the sum of a subarray, I computed the minimum number of trees required to get k bananas if we take this subarray. Afterwards I updated the dp table for the subarrays ending at the $i$'th index.

Spoiler

```cpp
for(int tc=1;tc<=t;tc++) {
    cout << "Case #" << tc << ": ";
    int n,req;
    cin >> n >> req;
    int ar[n];
    for(int i=0;i<n;i++) cin >> ar[i];
    int dp[req+1];
    for(int i=1;i<=req;i++) dp[i]=1e9;
    dp[0]=0;
    int ans=1e9;
    for(int i=0;i<n;i++) {
        int sum=0;
        // combine a subarray starting at i with the best subarray ending before i
        for(int j=i;j<n;j++) {
            sum+=ar[j];
            if(sum>req) break;
            else ans=std::min(dp[req-sum]+(j-i+1),ans);
        }
        sum=0;
        // record subarrays ending at i into dp
        for(int j=i;j>=0;j--) {
            sum+=ar[j];
            if(sum>req) break;
            else dp[sum]=min(dp[sum],(i-j+1));
        }
    }
    if(ans>=1e9) ans=-1;
    cout << ans << "\n";
}
```
 •  » » O(n^2) with gp_hash_table worked for me, though map and unordered_map TLEd.
 » 15 months ago, # | ← Rev. 2 →   Why were $O(N^2 \log N)$ solutions not allowed for C?
 •  » » Because strictly $O(N^2)$ solutions were intended, which limits other approaches to the same problem:

```cpp
void solve() {
    ll n, k;
    cin >> n >> k;
    vector<ll> v(n);
    for(ll i = 0; i < n; i++) cin >> v[i];
    ll ans = INT_MAX;
    vector<ll> mse(k + 1, INT_MAX);               // mse[s]: min trees for a piece ending earlier that sums to s
    vector<vector<ll>> dp(n, vector<ll>(n, -1));  // dp[i][j]: sum of v[i..j]
    for(ll i = 0; i < n; i++) {
        ll sum = 0;
        for(ll j = i; j < n; j++) {
            sum += v[j];
            dp[i][j] = sum;
            if(sum == k) ans = min(ans, j - i + 1);
        }
    }
    for(ll i = 0; i < n; i++) {
        for(ll j = i; j < n; j++) {
            ll sum = dp[i][j];
            if(sum <= k && sum >= 0) ans = min(ans, mse[k - sum] + (j - i + 1));
            else break;
        }
        for(ll j = i; j >= 0; j--) {
            ll sum = dp[j][i];
            if(sum <= k && sum >= 0) mse[sum] = min(mse[sum], i - j + 1);
            else break;
        }
    }
    if(ans == INT_MAX) { cout << "-1\n"; return; }
    cout << ans << endl;
}
```
 •  » » That is not true. I got full score with an $\mathcal{O}(n^2 \log n)$ algorithm.
 •  » » » Oh cool, please share it.
 •  » » » » My solution

```cpp
ll n,k;
cin>>n>>k;
ll ans=INF_MUL;
vector<ll> a(n+5,0);
vector<vector<pair<ll,ll>>> track(k+5);  // track[s]: (end position, length) pairs with decreasing length
vector<ll> last(k+5,INF_MUL);            // last[s]: last end position recorded for sum s
for(ll i=1;i<=n;i++){
    cin>>a[i];
    ll now=0;
    for(ll j=i;j>=1;j--){
        now+=a[j];
        if(now==k) ans=min(ans,i-j+1);
        ll y=k-now;
        if(y<0) break;
        pair<ll,ll> look={j,0};
        if(last[y]>=j){
            auto it=lower_bound(track[y].begin(),track[y].end(),look);
            if(it!=track[y].begin()){
                it--;
                ans=min(ans,i-j+1+it->second);
            }
        }
        else if(last[y]!=INF_MUL){
            ans=min(ans,i-j+1+track[y].back().second);
        }
        if(track[now].empty()){
            last[now]=i;
            track[now].push_back({i,i-j+1});
        }
        else if(track[now].back().second>(i-j+1)){
            track[now].push_back({i,i-j+1});
            last[now]=i;
        }
    }
}
if(ans==INF_MUL) ans=-1;
cout<<ans;
```
 •  » » » My solution was: for each of the $K$ possible sums I created a set of pairs (pos, val), where pos is the start of the interval and val is the number of trees needed in that case, and I also guarantee that I never insert pairs $(a, b)$, $(c, d)$ with $a < c$ and $b > d$. The idea is, going from right to left, to consider all intervals that start at a given point; let $s$ be the sum of an interval $[l, r]$, then you search the set corresponding to $sum = k-s$ for the first pair with $pos > r$ and combine their values.
•  » » » » An idea similar to this TLEd for me :( I used binary search to search for the other interval.
 •  » » » » » Same. It seems like it should work, but some more optimizations were required.
 •  » » 15 months ago, # ^ | ← Rev. 2 →   I had an O(n^2) solution saving the suffix sums in an unordered_map (I know technically it's not O(1)), and it TLEd. However, saving them in a vector got accepted. Does anyone know why the unordered_map suddenly drops so drastically in performance? If it were a bit worse I would understand, but it's off by minutes with unordered_map. I don't understand :(

Code with unordered_map here:

My Code

```cpp
void solve() {
    ll n, goal;
    cin >> n >> goal;
    vector<ll> nums, pfs{0};
    FOR(i,n){
        ll num;
        cin >> num;
        nums.push_back(num);
        pfs.push_back(pfs.back() + num);   // prefix sums
    }
    if(goal == 0){ cout << 0; return; }
    ll ret = INF;
    unordered_map<ll,ll> complement;       // complement[s]: min length of an already-seen piece summing to s
    complement[0] = 0;
    for(int i=n; i>0; --i){
        for(int j=i-1; j>=0; --j){
            ll curQuery = pfs[i]-pfs[j];
            if(curQuery > goal) break;
            if(curQuery == goal) ret = min(ret, (ll)i-j);
            if(curQuery < goal && complement.count(goal-curQuery)){
                ret = min(ret, i-j + complement[goal-curQuery]);
            }
        }
        // adding all subarrays (suffixes) starting at i
        for(int j=i; j<=n; ++j){
            ll possible = pfs[j] - pfs[i-1];
            if(complement.count(possible)){
                complement[possible] = min(complement[possible], (ll)j-i+1);
            } else {
                complement[possible] = (ll)j-i+1;
            }
        }
    }
    if(ret == INF) cout << -1;
    else cout << ret;
}
```
 •  » » » 15 months ago, # ^ | ← Rev. 2 →   'k' is 10^6 and you should only need O(K) memory, but when you write it like this, you're really using O(N^2) memory in the worst case (the sums aren't bounded at all and you're adding each one to the hash map), which is 2.5 * 10^7 longs. That's going to be super slow in a hash map. Just add "if(possible > 1e6) continue;" and it'll pass.
 •  » » » » 15 months ago, # ^ | ← Rev. 2 →   Good catch, thank you, I really appreciate your help. I just added it, and instead of >3 min it takes 28 seconds (on the third test case), whereas with a vector it only needs 2.7 seconds. A huge takeaway for me is that, when possible, use vectors/arrays.

Here with unordered_map (28.5 seconds):

```cpp
void solve() {
    ll n, goal;
    cin >> n >> goal;
    vector<ll> nums, pfs{0};
    FOR(i,n){
        ll num;
        cin >> num;
        nums.push_back(num);
        pfs.push_back(pfs.back() + num);
    }
    if(goal == 0){ cout << 0; return; }
    ll ret = INF;
    unordered_map<ll,ll> complement;
    complement[0] = 0;
    for(int i=n; i>0; --i){
        for(int j=i-1; j>=0; --j){
            ll curQuery = pfs[i]-pfs[j];
            if(curQuery > goal) break;
            if(curQuery == goal) ret = min(ret, (ll)i-j);
            if(curQuery < goal && complement.count(goal-curQuery)){
                ret = min(ret, i-j + complement[goal-curQuery]);
            }
        }
        // adding all subarrays (suffixes) starting at i, now capped at 1e6
        for(int j=i; j<=n; ++j){
            ll possible = pfs[j] - pfs[i-1];
            if(possible > 1000000) continue;
            if(complement.count(possible)){
                complement[possible] = min(complement[possible], (ll)j-i+1);
            } else {
                complement[possible] = (ll)j-i+1;
            }
        }
    }
    if(ret == INF) cout << -1;
    else cout << ret;
}
```

Here with vector (2.7 seconds):

```cpp
void solve() {
    ll n, goal;
    cin >> n >> goal;
    vector<ll> nums, pfs{0};
    FOR(i,n){
        ll num;
        cin >> num;
        nums.push_back(num);
        pfs.push_back(pfs.back() + num);
    }
    if(goal == 0){ cout << 0; return; }
    ll ret = INF;
    vector<ll> complement(1000001, -1);   // sized 1e6 + 1 so index 1000000 is valid
    complement[0] = 0;
    for(int i=n; i>0; --i){
        for(int j=i-1; j>=0; --j){
            ll curQuery = pfs[i]-pfs[j];
            if(curQuery > goal) break;
            if(curQuery == goal) ret = min(ret, (ll)i-j);
            if(curQuery < goal && complement[goal-curQuery] != -1){
                ret = min(ret, i-j + complement[goal-curQuery]);
            }
        }
        // adding all subarrays (suffixes) starting at i, capped at 1e6
        for(int j=i; j<=n; ++j){
            ll possible = pfs[j] - pfs[i-1];
            if(possible > 1000000) continue;
            if(complement[possible] != -1){
                complement[possible] = min(complement[possible], (ll)j-i+1);
            } else {
                complement[possible] = (ll)j-i+1;
            }
        }
    }
    if(ret == INF) cout << -1;
    else cout << ret;
}
```
 » Now Geometry problems are trendy at KickStart.
 » Simple doesn't mean convex, lesson learned.
 » In D, for A=7 and n=5, will it be impossible? Are there any other impossible cases for n=5 besides A<=4?
 » Is it possible to use ternary search on the coordinates in problem B? I'm not sure whether it will work.
•  » » Yes, ternary search will work.
 •  » » Yup. I did ternary search on x and y separately.
 •  » » » » code

```cpp
int check(int mid, vector<pair<int,int>> &v){
    int temp = 0;
    for(int i = 0; i < (int)v.size(); i++){
        if(mid >= v[i].first && mid <= v[i].second) temp += 0;  // inside the interval: no cost
        else temp += min(abs(mid - v[i].first), abs(mid - v[i].second));
    }
    return temp;
}

int ternary_search(int l, int r, vector<pair<int,int>> &v) {
    while (l < r) {
        int m1 = l + (r - l) / 3;
        int m2 = r - (r - l) / 3;
        int f1 = check(m1, v);
        int f2 = check(m2, v);
        if (f1 > f2) l = m1 + 1;
        else r = m2 - 1;
    }
    return l;
}

int main(){
    int n;
    cin >> n;
    vector<pair<int,int>> x, y;
    int ly = INT_MAX, lx = INT_MAX, ry = INT_MIN, rx = INT_MIN;
    for(int i = 0; i < n; i++){
        int x1, y1, x2, y2;
        cin >> x1 >> y1 >> x2 >> y2;
        ly = min(ly, min(y1, y2));
        lx = min(lx, min(x1, x2));
        rx = max(rx, max(x1, x2));
        ry = max(ry, max(y1, y2));
        x.emplace_back(min(x1, x2), max(x1, x2));
        y.emplace_back(min(y1, y2), max(y1, y2));
    }
    lx--, rx++;
    int ansx = ternary_search(lx, rx, x);
    ly--, ry++;
    int ansy = ternary_search(ly, ry, y);
    cout << ansx << ' ' << ansy << '\n';
}
```
 •  » » $O(N^2 \log N)$ passed for me. Fix the sum of the first part, then binary search over all other possible intervals with the required sum to make the total $K$.

Code for reference

```cpp
#include <bits/stdc++.h>
using namespace std;
#define nl '\n'
const int INF = 1e9 + 7;

void run_cases() {
    int N, K;
    cin >> N >> K;
    vector<int> B(N);
    for(auto &u: B) cin >> u;
    if(accumulate(B.begin(), B.end(), 0LL) < K) {
        cout << -1 << '\n';
        return;
    }
    // segments[s]: all (start, length) pairs of subarrays summing to s
    vector<vector<pair<int,int>>> segments(K + 1);
    for(int i = 0; i < N; i++) {
        int64_t sum = 0;
        for(int j = i; j < N; j++) {
            sum += B[j];
            if(sum <= K) segments[sum].emplace_back(i, j - i + 1);
            else break;
        }
    }
    // suffix minima of lengths, so a binary search by start yields the best length
    for(int i = 0; i <= K; i++) {
        for(int j = int(segments[i].size()) - 2; j >= 0; j--) {
            segments[i][j].second = min(segments[i][j].second, segments[i][j + 1].second);
        }
    }
    int cost = INF;
    for(auto segment: segments[K]) {
        cost = min(cost, segment.second);
        break;
    }
    if(cost == 1) { cout << 1 << '\n'; return; }
    for(int i = 0; i < N; i++) {
        int64_t sum = 0;
        for(int j = i; j < N; j++) {
            sum += B[j];
            int need = K - sum;
            if(need < 0) break;
            // first recorded interval with start > j
            int l = -1, r = segments[need].size();
            while(r > l + 1) {
                int m = (l + r) / 2;
                if(segments[need][m].first <= j) l = m;
                else r = m;
            }
            if(r < (int)segments[need].size()) cost = min(cost, j - i + 1 + segments[need][r].second);
        }
        if(cost == 2) break;
    }
    if(cost == INF) cost = -1;
    cout << cost << '\n';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(nullptr);
    int tests = 1;
    cin >> tests;
    for(int test = 1; test <= tests; test++) {
        cout << "Case #" << test << ": ";
        run_cases();
    }
}
```
 » 15 months ago, # | ← Rev. 2 →   Great contest! I feel that the constraints were very manageable, allowing me to fully solve every problem in Python (PyPy3). For problem C, I had an O(n^2) solution and used an array to memoise results to reduce overhead.

C

```python
import sys
input = lambda: sys.stdin.readline().rstrip("\r\n")

def readIntArr():
    return [int(x) for x in input().split()]

inf = float('inf')

def main():
    t = int(input())
    for testCase in range(1, t + 1):
        n, k = readIntArr()
        b = readIntArr()
        if k in b:
            ans = 1
        else:
            p = b.copy()
            for i in range(1, n):
                p[i] += p[i - 1]

            def getRangeSum(l, r):
                if l - 1 >= 0:
                    return p[r] - p[l - 1]
                return p[r]

            ans = inf
            # leftMinLen[bunches] = minimum number of trees on the left for this number of bunches
            leftMinLen = [inf] * (k + 1)
            for r1 in range(n):
                rightSum = 0
                rightLen = 0
                for r2 in range(r1, n):
                    rightSum += b[r2]
                    rightLen += 1
                    leftTarget = k - rightSum
                    if leftTarget < 0:
                        break
                    ans = min(ans, rightLen + leftMinLen[leftTarget])
                l2 = r1
                # update leftMinLen with the additional intervals ending at l2
                for l1 in range(l2 + 1):
                    leftLen = l2 - l1 + 1
                    leftSum = getRangeSum(l1, l2)
                    if leftSum <= k:
                        leftMinLen[leftSum] = min(leftMinLen[leftSum], leftLen)
            if ans == inf:
                ans = -1
        print('Case #{}: {}'.format(testCase, ans))

main()
```

For B I actually used a binary search over the gradients (for when the slope changes from negative to positive) for x and y. If anyone's interested in seeing all of my Python solutions, just search for YMSeah on the scoreboard.