Blocked websites (Long) TLE

Link to problem:

My solution:

In subtask 2 only 13,14 are showing TLE

It shows TLE. How can I optimize it further?

EDIT: I am trying to put the (+) sites in one structure and the (-) sites in another structure.

For each (-) site, I compare it against all the (+) sites and then store the longest continuous matching pattern (over all the (+) sites) in a third structure.

In the worst case we can have 10^3 (+) sites and 10^2 (-) sites, making the time complexity O(10^5).

So why am I getting TLE?

If you try explaining your approach, your thinking, and what you intended to do, I think you will receive an answer MUCH quicker. It would also help many people understand your code, which might otherwise take them considerable time. :slight_smile:

BTW, I faced roughly the same problem: 2 of the test cases showed TLE. The number of websites (the value of N) in those cases is about 3 × 10^4 to 5 × 10^4. So if your algorithm depends on the number of strings in both sets and on comparisons between each pair, as well as on their lengths, it will time out.


Let's say a test case has 4 × 10^4 websites, half good, half bad.

So we get 2 × 10^4 bad and 2 × 10^4 good websites.

If you compare each good website to each bad one, that is at least 4 × 10^8 comparisons. If each comparison also involves the string length, it comes to around 2 × 10^9 operations. I think that will give a TLE.

This question can be easily solved using the Trie data structure. You can study it here:

Yes, I know it can be solved using tries, but I am asking: is there any way to optimize my code/approach?

There is certainly a way to solve this problem without using tries.
You can store the blocked and unblocked sites separately (which, as you say, you did).
You can then try to optimize this further.

  1. We observe that for every blocked site, we only need to compare it with those unblocked sites that start with the same letter. You can use a vector of vectors to achieve this.

  2. Sort the websites in both structures. While iterating through the unblocked websites (for a particular blocked website), keep track of the maximum number of common leading characters, and once that number drops below the current maximum, you can stop the search for that blocked website. This works because the websites are sorted.

These optimizations should be enough to pass the required time constraint.
Here is my solution for reference - link text


This is my approach to solving this problem: using a trie, with a DFS to display the prefixes of the blocked sites.

I solved it using Trie data structure and DFS. Here’s the code for the same.

That is the answer I was looking for…
I have stored the blocked and unblocked sites separately.
I also compared the first letters via hashing; I just didn't sort the structures.
BTW, thanks!!!