Given a matrix A[][] of order m and another matrix B[][] of order n such that m > n, how do you find the occurrences of matrix B[][] inside matrix A[][]?
The Baker-Bird algorithm is what you are looking for. In brief: split matrix B into its separate rows, which gives you n independent patterns to search for in matrix A. Then, for each position in A, compute whether some row of B ends there, and store the result in an integer matrix C such that C[i][j] = k if the k-th row of B ends at position A[i][j]. This step can be done easily with the Aho-Corasick algorithm: build one automaton over all n row patterns and feed each row of A through it.
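Here is a minimal sketch of that first step, assuming A and B are given as lists of lists of comparable elements and that the rows of B are distinct; the function name build_row_index and the "0 means no row ends here" convention are my own choices, not part of the original description:

```
from collections import deque

def build_row_index(A, B):
    """Fill C[i][j] = k (1-based) if the k-th row of B ends at column j of
    row i in A, else 0. Uses an Aho-Corasick automaton over the rows of B."""
    goto = [{}]          # goto[state][symbol] -> next state
    fail = [0]           # failure links
    out = [0]            # out[state] = 1-based row index ending at this state

    # Insert every row of B as a pattern.
    for k, row in enumerate(B, start=1):
        state = 0
        for ch in row:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(0)
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state] = k   # row k ends here (rows assumed distinct)

    # BFS to compute failure links.
    q = deque()
    for ch, s in goto[0].items():
        q.append(s)
    while q:
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            if out[s] == 0:
                out[s] = out[fail[s]]

    # Run every row of A through the automaton and record matches in C.
    C = [[0] * len(A[0]) for _ in range(len(A))]
    for i, arow in enumerate(A):
        state = 0
        for j, ch in enumerate(arow):
            while state and ch not in goto[state]:
                state = fail[state]
            state = goto[state].get(ch, 0)
            C[i][j] = out[state]
    return C
```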
Once matrix C is filled, the problem becomes simple: look down each column of C for the consecutive sequence 1, 2, 3, …, n. Wherever that sequence appears, you have found an occurrence of B in A. The total time complexity is O(n^2 + m^2), which is optimal.
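A sketch of that column scan, reusing the C returned by build_row_index above; the (i - n + 1, j - n + 1) arithmetic just converts the cell where the last row of B ends into the top-left corner of the occurrence:

```
def find_occurrences(C, n):
    """Return the top-left corners (i, j) of every occurrence of B,
    by scanning each column of C for the run 1, 2, ..., n."""
    hits = []
    for j in range(len(C[0])):
        run = 0                       # length of the prefix 1, 2, ..., run seen so far
        for i in range(len(C)):
            if C[i][j] == run + 1:
                run += 1
            elif C[i][j] == 1:
                run = 1
            else:
                run = 0
            if run == n:
                # The n-th row of B ends at (i, j), so B starts
                # n - 1 rows above and n - 1 columns to the left.
                hits.append((i - n + 1, j - n + 1))
    return hits
```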
But this is pretty nasty, and I assumed above that the rows of B are distinct. If they are not, the problem is still solvable, but in matrix C you no longer look for 1, 2, …, n; instead you look for a sequence of row identifiers such as 1, 2, 1, … (if, say, the first and third rows of B are equal), and you have to run the KMP algorithm down each column of C to find that sequence.
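A rough sketch of that generalisation, allowing duplicate rows in B. To keep it self-contained I fill C with a simple hash-based window match instead of Aho-Corasick (which costs O(m^2 · n) here rather than the optimal bound; in the real algorithm you would still build the automaton over the distinct rows), then run KMP over the identifier sequence down each column. All names are mine:

```
def find_occurrences_general(A, B):
    """Allow duplicate rows in B: give every distinct row an identifier,
    let C[i][j] be the identifier of the row of B ending at A[i][j] (0 if none),
    and search each column of C for the identifier sequence of B's rows with KMP."""
    n, m = len(B), len(A)

    # Identifier per distinct row of B, and the identifier pattern to find.
    ids, row_id = {}, []
    for row in B:
        key = tuple(row)
        if key not in ids:
            ids[key] = len(ids) + 1
        row_id.append(ids[key])

    # Fill C by hashing the length-n window ending at each column.
    C = [[0] * len(A[0]) for _ in range(m)]
    for i in range(m):
        for j in range(n - 1, len(A[i])):
            C[i][j] = ids.get(tuple(A[i][j - n + 1:j + 1]), 0)

    # KMP failure function for the identifier pattern.
    fail, k = [0] * n, 0
    for q in range(1, n):
        while k and row_id[q] != row_id[k]:
            k = fail[k - 1]
        if row_id[q] == row_id[k]:
            k += 1
        fail[q] = k

    # KMP scan down every column of C.
    hits = []
    for j in range(n - 1, len(A[0])):
        k = 0
        for i in range(m):
            while k and C[i][j] != row_id[k]:
                k = fail[k - 1]
            if C[i][j] == row_id[k]:
                k += 1
            if k == n:
                hits.append((i - n + 1, j - n + 1))
                k = fail[k - 1]
    return hits
```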