Instructions:
Choose your current research development phase below, insert the relevant information into the input fields, and generate a customized prompt. Copy the result and use it with any large language model (ChatGPT, Claude, Gemini, etc.) to get targeted feedback on your work.
Note that pasting your entire relevance table may affect formatting. Check that your submission remains complete and correctly structured; for more reliable results, we recommend filling out the structured fields.
Note that pasting your full literature review structure may affect formatting. Check that your submission remains complete and correctly structured; for more reliable results, we recommend filling out the structured fields.
You can use this prompt to have AI check your submission's formal formatting. Use it with any AI chatbot by uploading your document (ideally in PDF) alongside this prompt. You may remove personal information from your cover page before uploading, though your final submission must include a complete cover page.
This tool provides carefully crafted artificial intelligence (AI) prompts that encode how we approach research design and structure at EcoS, allowing you to use any AI large language model (LLM) to receive program-specific feedback before you submit your research project assignments to the professors.
The key difference from generic AI interactions lies in specificity and testing. When you ask an AI chatbot a general question like "Is my research question good?", the response will be based on the AI's own assessment, which has no clear foundation for what actually constitutes a 'good' research question. Our prompts, on the other hand, specify exactly what to look for and how to evaluate each component of the research project development process, channeling the AI's pattern-matching capabilities toward our specific program requirements.
We spent extensive time developing and refining these prompts, testing various versions to ensure they generate useful, program-relevant responses rather than generic feedback. We chose this prompt-based approach because it is future-proof and versatile: as AI technology evolves, you can use these prompts with any emerging AI model and still receive feedback that aligns with our academic standards.
We developed this tool as an educational resource for EcoS students, recognizing that AI has already transformed academic work and many students are using various AI tools. Rather than leaving students to navigate AI feedback without guidance, we wanted to channel that usage in a productive, program-specific direction.
The goal is not to do your intellectual work for you, but to help ensure your research serves your intellectual goals while aligning with sound methodological principles. We believe that by providing access to this type of AI assistance, we not only create something useful but also something that will benefit all students equally, regardless of their prior experience using AI LLMs.
No, you are not required to use these prompts. You can still develop your research projects by using existing guidance materials and asking for direct feedback from professors. However, we developed these prompts because we saw an opportunity to harness emerging technology for genuine educational benefit: round-the-clock feedback that democratizes access to detailed academic support.
Professors will still expect you to demonstrate deep understanding in consultations, presentations, and thesis defenses. The prompts help you prepare for these interactions by ensuring your work meets basic requirements, allowing discussions with professors to focus on substantive theoretical nuances and methodological considerations that require genuine expertise.
These prompts help you catch fundamental requirement issues early and avoid unproductive research directions, allowing you to focus energy on original thinking and rigorous analysis. By identifying potential problems before you have invested significant time, you can redirect efforts toward more promising directions—particularly valuable during early research stages when adjustments are still feasible.
Beyond generating assessments, you can ask follow-up questions about feedback, request explanations of concepts you do not understand, and use AI to clarify methodological considerations. When your work already meets basic requirements and demonstrates manageability, you can dedicate consultation time with professors to sophisticated research aspects that require genuine understanding and experience.
We have identified several key practices that will optimize your experience with this tool:
Start early in your development process rather than using this as a last-minute check. If you discover fundamental issues right before a submission deadline, you will not have time to address them properly. Remember: this is your research project, and the goal is to improve your understanding of how best to approach it, not simply to generate solutions that satisfy an AI system.
Engage critical thinking throughout the process. AI is not perfect, and although these prompts have been tested thoroughly, the feedback generated can vary. Make sure that when you prepare your initial drafts you genuinely understand your core ideas, and then use the generated feedback as a starting point for deeper reflection about your work rather than as a definitive judgment. Remember that the goal is enhancement, not replacement.
Take advantage of AI's broader capabilities beyond evaluating submissions. Given the general capabilities of AI, you can ask follow-up questions about specific feedback, request explanations of concepts you do not understand, or ask it to respond in your preferred language if English is not your native language.
Prepare for professor consultations by using feedback to ensure your work meets basic requirements and demonstrates general manageability. Do not get too focused on the specifics of the AI feedback. Remember that you can (and should!) discuss any uncertainties about specifics with professors who have the necessary experience in both supervising and writing research papers. In fact, given AI’s structural limitations, judgments about the bigger picture of your research projects require contextual reasoning and genuine understanding that only experienced researchers can provide.
It is crucial to understand that AI generates text based on patterns in its training data; it does not actually "know" anything the way humans do. When AI evaluates your work, it makes probability-based predictions about likely responses rather than analyzing your work systematically the way a researcher would. For instance, if you ask AI whether there is 'enough' literature on a certain research topic, it may confidently say yes, even though it has not actually searched any databases or performed a systematic evaluation.
Even with carefully crafted prompts, different AI tools can provide varying assessments of identical work, and the same tool can generate different feedback in new sessions. This means you should never simply accept or reject AI feedback without consideration. Instead, use it as stimulus to better understand and critically examine your own work.
The value lies in the reflection process the feedback encourages, not in AI's specific conclusions. If feedback seems wrong or unhelpful, that's an opportunity to think more deeply about whether aspects of your work could be clearer or stronger. But don’t let it discourage you. Trust your own judgment, but let feedback prompt you to consider perspectives you might not have examined independently.