This work investigates how the design of crowdsourced tasks can influence responses. As a formative line of inquiry, this study sought to understand how users would react, whether through movement, verbal response, or a shift of focus, to varying drone flight paths. When designing an experiment, running several proto-studies can help generate an actionable dataset, but it has been unclear how differences such as question phrasing or the inclusion of pre- and post-surveys impact the results. Leveraging methods from the psychology, computer-supported cooperative work, and human-robot interaction communities, this work explored best practices and lessons learned for crowdsourcing that reduce the time to actionable data when defining new communication paradigms. These lessons will be broadly applicable within the human-robot interaction community, even beyond those interested in defining flight paths, because they provide a scaffold on which to build future experiments seeking to communicate using non-anthropomorphic robots. Important results and recommendations include: negative affect increases with question quantity; completion time tracks the total number of responses rather than the number of videos; responses relate more to the video shown than to the question asked; and question lengths should be varied to maintain engagement.