There is a view of exploratory testing that holds it to be spontaneous (as in combustion) - you just turn up at a keyboard (or other lab/test equipment) and "just do it"!
Nice, if you can get it.
But what if your system under test is not trivial - in terms of data set-up, configuration, simulated subsystems, user profiles, behaviour or network elements? Does this mean that you can't have an exploratory approach?
Well, of course you can.
Think of it as going on a walk/hike/trek in an area you haven't covered before. To actually get to the "start" of the hike you might use public or scheduled transport (a pre-determined schedule) to reach the start point.
I emphasize the uncertainty around the "start" because you might move it depending on how you define where the trip should begin.
You might decide to start your walk when stepping off the bus - the bus is a pre-determined script that gets you to the point where you will start - but perhaps you notice something odd on the journey; maybe the bus takes a route that isn't signposted for your destination. Is it a pre-determined route or a diversion? Will the journey take longer? ... The questions start.
Using a script to get to your starting point doesn't mean that you can't ask questions - remember, this is testing with your eyes wide open as opposed to eyes wide shut. You finally get to your start point - where the trek can begin.
It's just like this with an exploratory approach - there might be a whole series of set-up and configuration steps to get the system into a state ready for working with your test ideas. It might be that you need to construct a simulation of a network element before you can start testing in the desired area. All of this may take some time to achieve - a whole series of hoops and hurdles.
In some of the systems I work with this is exactly what we do - it's about a certain design (simulation, config and test ideas), getting to the starting point where we can begin working with our test ideas.
Sometimes the test ideas yield more investigative tracks than others - but that is exactly where we "intend" the exploration/testing to start. Like any (good) tester, we notice things on the way; we raise questions about our simulation assumptions and about the problem/test space in our target area. We learn about the product (along the way and in our target areas), we learn about our test ideas, and we learn whether we're missing something vital in our simulation data, in our configuration, or in how the whole might relate to a real end-user.
That was our walk in the woods, and the scripted element is very much a part of the trek/exploration. The scripted part sets the conditions for the exploration - an enabler - and whether I question the set-up as we go is a matter of preference, time, priorities and feedback from previous testing.
Do you observe or get curious on the way to your walk in the woods?