Actually, the post and pre items were done at the same time, after the session. The technique is called a retrospective post-then-pre evaluation. I know it may seem unconventional, but it developed in response to the “response shift bias” that evaluators of social programs often encountered.
People tend to rate themselves highly, until they know what they don’t know. For instance, if you ask rural teenage moms how good they are as parents on a 1-to-5 scale, they might rate themselves as 4s and 5s on a pre survey.
But then, after going through a course on proper parenting, they start to realize all the things they ought to be doing, but aren’t. So they rate themselves as 2s and 3s on the post survey.
It starts to appear as if the course made them worse parents, when in fact, they’ve learned a lot and their intentions have shifted quite a bit. To correct for this, some program evaluations started asking both sets of questions at the end.
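Here’s a minimal sketch with made-up ratings (purely illustrative, not data from any real program) showing how the same learning gain looks like a decline under a traditional pre/post design but shows up as an improvement when the “pre” rating is collected retrospectively:

```python
# Hypothetical parenting self-ratings on a 1-to-5 scale (illustrative only).

# Traditional design: "pre" is asked before the course, when participants
# don't yet know what they don't know, so the ratings are inflated.
traditional_pre = [4, 5, 4, 5, 4]

# Post ratings collected after the course, now calibrated against a
# better understanding of what good parenting involves.
post = [3, 3, 2, 3, 3]

# Retrospective design: at the end of the course, participants are asked
# "looking back, where were you before?" using their new frame of reference.
retrospective_pre = [2, 2, 1, 2, 2]

def mean_change(pre, post):
    """Average per-participant change from pre to post."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

print("Traditional pre -> post:  ", mean_change(traditional_pre, post))   # -1.6: looks like the course hurt
print("Retrospective pre -> post:", mean_change(retrospective_pre, post)) # +1.0: reflects the actual gain
```

Same participants, same post ratings; the only difference is when (and with what frame of reference) the “before” rating is captured.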
More information here: https://fyi.extension.wisc.edu/programdevelopment/files/2016/04/Tipsheet27.pdf
And no, we didn’t use ratings from others immediately, but we did use specific engagement drivers to inform the learning (and the eval). Later on, we followed up with engagement surveys to demonstrate that the managers who took part in the training improved in those areas.