There is currently a limit to how much data Gavagai Explorer can analyze in one go: around 10,000 responses. The latest version of Explorer was built to address this limitation, and its new architecture can handle data of any size provided we scale our servers accordingly; our initial goal is to handle 150,000 verbatims in a single analysis. We intend to scale according to customer demand, and I am confident that we will be able to do so.

For now, I suggest reducing the size of your data set if possible. You can use filters to slice the data on any of the other metadata columns in your input file, and if you are just testing the functionality of Explorer, a smaller data set should give you a good idea of how it works. Please also tell us what your requirements are, and we'll talk about how to meet them.
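If you would rather trim the file before uploading it, a generic pre-filtering step outside Explorer works just as well. Below is a minimal sketch in Python using pandas; the file names, the `region` column, and the value `"Sweden"` are hypothetical placeholders for whatever metadata your own input file contains.

```python
import pandas as pd

# Hypothetical file and column names: adjust to your own input file.
df = pd.read_csv("responses.csv")

# Keep only one slice of the metadata, e.g. a single region,
# to get below the ~10,000-response limit.
subset = df[df["region"] == "Sweden"]

# Alternatively, take a random sample if you just want to test Explorer.
sample = df.sample(n=5000, random_state=42)

subset.to_csv("responses_region.csv", index=False)
sample.to_csv("responses_sample.csv", index=False)
```

Either resulting file can then be uploaded to Explorer as usual.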