Here is the link to the problem, for those who are just surfing the forum randomly.
DIFFICULTY: Easy-Medium
PRE-REQUISITES:
- Segment Tree
- Sieve of Eratosthenes
PROBLEM:
Given an array A of N positive integers, we are expected to perform two types of queries, GET and UPDATE; the descriptions of these queries are given in the problem statement.
EXPLANATION:
We are required to use LeastPrimeDivisor(x) [LPD] in both of our queries, so a pre-processing step is to evaluate and store the LPD of every Ai in an array, say Sieve[], since |Ai| <= 10^6 (a sketch of this sieve is given below, after the tutorial links).
If you are new to the sieve:
If you are new to RMQ and segment trees:
- Topcoder Tutorial
- Hackerearth Tutorial
- RMQSQ SPOJ
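For concreteness, here is a minimal sketch of the LPD sieve described above, assuming the value limit of 10^6 and taking lpd[1] = 1; the names are illustrative, not from the setter's code.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXA = 1000000;   // value limit |Ai| <= 10^6
int lpd[MAXA + 1];          // lpd[x] = least prime divisor of x (lpd[1] taken as 1)

// Classic sieve: the first prime that reaches x becomes its least prime divisor.
void build_lpd() {
    lpd[1] = 1;
    for (int i = 2; i <= MAXA; ++i) {
        if (lpd[i] == 0) {                       // i was never marked, so it is prime
            for (int j = i; j <= MAXA; j += i)
                if (lpd[j] == 0) lpd[j] = i;     // the smallest prime hitting j is its LPD
        }
    }
}
```

This runs in O(A_max·loglog(A_max)) for A_max = 10^6, which matches the pre-processing cost in the analysis below.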
GET(L, R)
The query asks us to return the maximum LPD over all Ai in the range [L, R]; equivalently, it is a range-max query, which can be done using a segment tree where each internal node holds the maximum LPD of its two children. So first build a segment tree that supports range-max queries and return max(LPD) over the queried range. Since we already have the LPD of every element from the sieve, this step isn't much of an issue; see the code for more clarification. GET(L, R) takes O(logN) time per call, as usual.
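A minimal sketch of the build and GET parts, assuming the lpd[] table from the sieve sketch above and a 1-indexed array; the names (seg, get_max, and so on) are illustrative, not taken from the setter's code.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXA = 1000000;
int lpd[MAXA + 1];          // filled by the sieve sketch above
int n;                      // array size
vector<int> a;              // input array, 1-indexed: a[1..n]
vector<int> seg;            // seg[node] = max LPD over that node's range

void build(int node, int lo, int hi) {
    if (lo == hi) { seg[node] = lpd[a[lo]]; return; }
    int mid = (lo + hi) / 2;
    build(2 * node, lo, mid);
    build(2 * node + 1, mid + 1, hi);
    seg[node] = max(seg[2 * node], seg[2 * node + 1]);   // internal node keeps the max of its children
}

// GET(L, R): maximum LPD over a[l..r], O(logN) per call.
int get_max(int node, int lo, int hi, int l, int r) {
    if (r < lo || hi < l) return 0;               // disjoint segment contributes nothing
    if (l <= lo && hi <= r) return seg[node];     // node range fully inside the query
    int mid = (lo + hi) / 2;
    return max(get_max(2 * node, lo, mid, l, r),
               get_max(2 * node + 1, mid + 1, hi, l, r));
}
```

Here seg would be sized to 4*n and build(1, 1, n) called once after reading the array.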
UPDATE(L, R)
As you might expect, this is our range-update query. Seeing range updates, we might be tempted to use lazy propagation at first, but thinking carefully we can notice that a lazy term provides no useful information about how to update a node's maximum LPD value. So we are better off with point updates. The problem with point updates is that each one takes O(logN), and in the worst case a single **UPDATE(1, N)** touches all N elements; doing naive point updates over that range costs O(NlogN) per query, or O(M*NlogN) = O(N^2logN) for M such queries (taking M on the order of N), which will certainly time out.
OBSERVATION
If you watch closely, each element in the range [L, R] of the array A is reduced by a factor of its LPD, and once an element is reduced to 1 it stays 1 forever. The limit on an element in our case is 10^6, which is roughly 2^20, so each element is guaranteed to become 1 after at most 20 updates (convince yourself: every division by the LPD at least halves the value). Thus we can stop updating elements that have already been reduced to 1, and our worst-case time for M UPDATE(1, N) queries reduces to O(20*NlogN), independent of M.
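Continuing the sketch above (same a, lpd, and seg), UPDATE can be written like the BUILD recursion with one extra pruning check: a node whose stored maximum is 1 contains only 1s and is skipped entirely. This assumes LPD(1) is taken as 1; adjust if the problem defines it differently.

```cpp
// UPDATE(L, R): divide every a[i] in [l, r] by its LPD once,
// skipping whole subtrees that already consist of 1s.
void update(int node, int lo, int hi, int l, int r) {
    if (r < lo || hi < l || seg[node] <= 1) return;   // disjoint, or everything here is already 1
    if (lo == hi) {                                   // leaf: apply one division and refresh its LPD
        a[lo] /= lpd[a[lo]];
        seg[node] = lpd[a[lo]];
        return;
    }
    int mid = (lo + hi) / 2;
    update(2 * node, lo, mid, l, r);
    update(2 * node + 1, mid + 1, hi, l, r);
    seg[node] = max(seg[2 * node], seg[2 * node + 1]); // recompute the max on the way back up
}
```

Each leaf is visited only while its value is greater than 1, which is exactly where the at-most-20-divisions bound kicks in.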
ANALYSIS
Sieve pre-processing: O(A_max·loglog(A_max)), where A_max = 10^6 is the value limit
Building the segment tree: O(4N) ~ O(N)
GET(L, R): O(logN) per query
UPDATE(L, R): O(20*NlogN) in total over all UPDATE queries
Total time complexity: O(A_max·loglog(A_max) + N + MlogN + 20*NlogN) ~ O((M+N)logN)
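To tie the sketches above together, a possible driver is shown below. The input format assumed here (N, M, the array, then M lines each containing a query type and L, R, with 0 for GET and 1 for UPDATE) is a guess for illustration only; follow the actual problem statement.

```cpp
int main() {
    build_lpd();                            // LPD sieve from the first sketch
    int m;
    scanf("%d %d", &n, &m);
    a.assign(n + 1, 0);
    seg.assign(4 * n, 0);
    for (int i = 1; i <= n; ++i) scanf("%d", &a[i]);
    build(1, 1, n);
    while (m--) {
        int type, l, r;                     // query encoding is assumed, check the statement
        scanf("%d %d %d", &type, &l, &r);
        if (type == 0) printf("%d\n", get_max(1, 1, n, l, r));   // GET
        else update(1, 1, n, l, r);                              // UPDATE
    }
    return 0;
}
```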
A similar question that you can try is GSS4 on SPOJ. It is a classical SPOJ problem, and nearly everyone who has ever gotten to know segment trees knows about it, so you can easily guess why there are so many ACs for that problem.
NOTE
In practice, the UPDATE operation can be much faster than O(20*NlogN). If you implement UPDATE(L, R) by doing naive point updates over the range [L, R], then it surely takes O(NlogN) time per operation. But if you implement the UPDATE function like the one used to BUILD the segment tree (as in the sketch above), then each UPDATE(L, R) takes O(R-L+1) time when R-L+1 dominates logN, and O(logN) otherwise. The worst case stays the same, since all queries could be of type UPDATE(L, L) (here logN dominates the R-L+1 term), in which case the operation takes O(logN) time, the same as a naive point update; comment if you need more on this. In the analysis, to make things as bad as possible, I assumed all the queries are of GET type while still accounting for all possible UPDATE work.