AI Goes to College
Content provided by Craig Van Slyke. All podcast content (including episodes, graphics, and podcast descriptions) is uploaded and provided directly by Craig Van Slyke or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://zh.player.fm/legal.
Generative artificial intelligence (GAI) has taken higher education by storm. Higher ed professionals need to find ways to understand and keep up with developments in GAI. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response. Each episode offers insights into how to leverage GAI, and into the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.
21 episodes
All episodes
AI's Impact on Critical Thinking, the Talent Pipeline, and Academic Research: Implications for Higher Education 36:25
In a timely discussion, Craig Van Slyke and Robert E. Crossler discuss the latest advancements in generative artificial intelligence, with a particular focus on the unveiling of Claude 3.7 Sonnet. This development has prompted a wave of excitement and speculation about its implications for the future of programming. The hosts explain how this model could revolutionize the way coding is approached, potentially rendering traditional entry-level programming roles obsolete while enhancing the efficiency of seasoned professionals. This raises critical questions about the evolving nature of job markets and the skills required in the face of such technological advancements.

As the dialogue unfolds, the hosts turn to the ethical and educational ramifications of integrating AI into academic environments. They express concern about the diminishing emphasis on critical thinking skills, particularly among students who may rely heavily on AI-generated outputs. Van Slyke and Crossler emphasize the need for educators not only to familiarize themselves with these technologies but also to instill a sense of skepticism and analytical rigor in their students. This approach is vital for ensuring that future professionals can discern and evaluate the information generated by AI, fostering a culture of informed decision-making and innovation. Van Slyke and Crossler offer some interesting ways in which AI can be used to help students improve their critical thinking skills. The hosts also discuss how new AI tools, such as OpenAI's ChatGPT Deep Research, may reshape the way academic research is done for faculty and students. Higher ed professionals may need to rethink the very purpose of learning activities such as research papers.
The episode concludes with a call to action for higher education institutions, urging them to rethink their pedagogical strategies in light of the rapid proliferation of AI technologies. By fostering a collaborative and adaptive educational environment, educators can empower students to harness the capabilities of generative AI responsibly, paving the way for a future where technology and critical thinking reinforce each other.

Takeaways:
- The recent advancements in generative AI, particularly Claude 3.7 Sonnet, have significant implications for coding practices across disciplines.
- There is growing concern among educators about the potential displacement of entry-level programming jobs by generative AI.
- Higher education institutions must adapt their pedagogical approaches to integrate generative AI into the curriculum in ways that strengthen critical thinking.
- Generative AI tools can be valuable resources for academic research, but they must be used carefully to avoid over-reliance and to preserve the integrity of scholarly work.
- The conversation around generative AI's impact on critical thinking reveals a dual potential for degradation or enhancement, depending on how the tools are used.
- Educators need a deeper understanding of generative AI technologies to guide students in their effective and ethical use in academic contexts.

Companies mentioned in this episode: Anthropic, OpenAI, Microsoft, Peapod, DoorDash, Uber Eats, Walmart, Chewy

Mentioned in this episode: AI Goes to College Newsletter
In this wide-ranging discussion, Craig Van Slyke and Robert E. Crossler explore recent AI developments and tackle the fundamental challenges facing higher education in an AI-enhanced world. They begin by examining GPT Tasks, highlighting practical applications like automated news summaries and scheduled tasks, while sharing personal experiments that demonstrate the importance of playful exploration with new AI tools. The conversation then turns to Gemini's new fact-checking features, with important cautions about source verification and the need to balance convenience with critical evaluation of AI-generated content. The hosts have an engaging discussion about the challenge of "transactional education" - where learning has become a points-for-grades exchange - and explore alternative approaches like mastery-based learning and European assessment models. They discuss concrete strategies for moving beyond traditional grading schemes, including reducing assignment volume and focusing on process over outcomes. The episode concludes with an announcement of an upcoming repository for AI-enhanced teaching activities and a call for educators across disciplines to share their innovative approaches.

Key Takeaways:
- GPT Tasks enables automated, scheduled AI interactions, from news summaries to daily content delivery
- Gemini's new fact-checking feature provides source verification but requires careful evaluation of source credibility
- DeepSeek, the new AI model, is worth checking out, but be aware of privacy concerns
- The challenge of "transactional education" requires rethinking traditional assessment methods
- Practical alternatives to points-based grading include focusing on mastery and reducing assignment volume
- Faculty across disciplines are invited to contribute to a new repository of AI-enhanced teaching activities

Outline:
- GPT Tasks and functionalities: AI agents conducting various tasks; use case: an AI travel planner and email agent; development of GPT Tasks to run scheduled prompts; example uses: receiving AI and higher ed news updates, and a daily dad joke feature
- Exploration of new AI tools: the importance of experimenting with AI tools for learning; play and the psychological basis for learning technology; encouragement to try new tools without overcomplicating the process
- Comparison of search tools: comparing ChatGPT with Google Alerts; tailoring information relevance and accuracy; the importance of validating AI-generated information
- Privacy and availability of AI tools: availability limited to certain user levels and regions; variability in tool features across platforms
- DeepSeek, a new AI model: introduction and capabilities; cost efficiency and openness as an open-source model; privacy concerns related to data sharing and Chinese government access; impact on NVIDIA stock and benchmark comparisons
- Open source and computational needs: the role of open source in future AI model development; computational requirements and the challenges of running models locally
- Privacy and intellectual property concerns: the distinction between privacy and intellectual property; concerns about research data compliance and institutional rules
- Writing with AI tools: differentiating writing and editing; using AI to enhance rather than replace human creativity; an increase in writing quality in published work
- The transactional education model: challenges posed by the transactional nature of education; the importance of mastery and a process focus in learning; final exams vs. continuous assessment
- Proposed repository for active learning activities: building a shared resource for educators; encouragement for community contribution and collaboration; a long-term vision for enhancing educational engagement with AI
- Conclusion and call for interaction: inviting listeners to contribute ideas and share practices; encouragement for community-driven improvement in AI-integrated education

Time Stamps:
[00:00] Introduction and GPT Tasks discussion
[15:45] Gemini's new features and source verification
[25:20] Writing process and AI tools
[35:10] Transactional education challenges
[45:00] Announcement of teaching activity repository

Links:
ChatGPT: https://chat.openai.com/
Gemini: https://gemini.google.com/
DeepSeek: https://deepseek.ai/
Anthropic: https://www.anthropic.com/
Claude: https://www.anthropic.com/claude
Llama: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/ (this links to a blog post about Llama, as there isn't a dedicated website for the model itself)

Mentioned in this episode: AI Goes to College Newsletter
AI hallucinations, or confabulations, can actually foster scientific innovation by generating a wealth of ideas, even if many of them are incorrect. Craig Van Slyke and Robert E. Crossler explore how AI's ability to rapidly process information allows researchers to brainstorm and ideate more effectively, ultimately leading to significant breakthroughs in various fields. They discuss the need for a shift in how we train scientists, emphasizing critical thinking and the ability to assess AI-generated content. The conversation also touches on the potential risks of AI in education, including the challenge of maintaining student engagement and the fear of students using AI to cheat. As they dive into the latest tools like Google's Gemini and NotebookLM, the hosts highlight the importance of adapting teaching methods to leverage AI's capabilities while ensuring students develop essential skills to thrive in an AI-augmented world. The latest podcast episode features an engaging discussion between Craig Van Slyke and Robert E. Crossler about the impact of AI on innovation and education. They dive into the concept of AI hallucinations and confabulations, noting that while these outputs may be inaccurate, they can spark creative thinking and lead to valuable scientific breakthroughs. Crossler emphasizes that trained scientists can sift through these AI-generated ideas, helping to separate the wheat from the chaff. This perspective reframes the way we view AI's role in generating new knowledge and highlights the importance of human expertise in guiding this process. As the dialogue progresses, the hosts address the implications of AI on educational practices. They express concern about the reliance on self-directed learning, noting that many students struggle to engage deeply without structured support. Van Slyke and Crossler advocate for a reimagined educational framework that incorporates AI tools, encouraging educators to foster critical thinking and analytical skills. 
By challenging students to interact with AI outputs actively, such as critiquing AI-generated reports or creating quizzes based on their work, instructors can ensure that learning is meaningful and substantive. The episode also explores practical applications of AI tools like Google's Gemini and NotebookLM for enhancing educational experiences. They discuss how these tools can facilitate research and content creation, making it easier for students to engage with complex topics. However, they also acknowledge the potential for misuse, such as cheating. The hosts argue that by redesigning assignments to focus on critical engagement with AI-generated content, educators can mitigate these risks while enriching the learning process.

In summary, the episode provides a thought-provoking examination of how AI can both challenge and enhance the educational landscape, urging educators to adapt their approaches to prepare students for a future where AI is an integral part of knowledge acquisition.

Takeaways:
- AI hallucinations, referred to as confabulations, can stimulate scientific innovation by generating diverse ideas.
- The rapid consumption of information by AI accelerates connections that human scientists might miss.
- Future scientists must adapt their training to critically assess AI-generated confabulations for practical use.
- Education needs to evolve to help students engage with AI as a tool for learning.
- Using AI tools in the classroom can enhance critical thinking skills and analytical abilities.
- Collaboration among educators is essential to share effective strategies for utilizing AI technologies.

Links:
1. New York Times article: https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html
2. Poe.com voice generators: https://aigoestocollege.substack.com/p/an-experiment-with-poecoms-new-speech?r=2eqpnj
3. Gemini Deep Research: https://aigoestocollege.substack.com/p/gemini-deep-research-a-true-game?r=2eqpnj
4. Notebook LM and audio overviews: https://open.substack.com/pub/aigoestocollege/p/notebook-lm-joining-the-audio-interview

Mentioned in this episode: AI Goes to College Newsletter
This episode of AI Goes to College discusses the practical applications of generative AI tools in academic research, focusing on how they can enhance the research process for higher education professionals. Hosts Craig Van Slyke and Robert E. Crossler cover three key tools: Connected Papers, Research Rabbit, and Scite, highlighting their functionalities and the importance of transparency in their use. They emphasize the need for human oversight in research, cautioning against over-reliance on AI-generated content, as it may lack the critical thought necessary for rigorous academic work. The conversation also touches on the emerging tool NotebookLM, which allows users to query research articles and create study guides, while raising ethical concerns about data usage and bias in AI outputs. Ultimately, Craig and Rob encourage listeners to explore these tools thoughtfully and integrate them into their research practices while maintaining a critical perspective on the information they generate.

---

The integration of generative AI tools into academic research is an evolving topic that Craig and Rob approach with both enthusiasm and caution. Their conversation centers on a recent Brown Bag series at Washington State University, where Rob's doctoral students showcased innovative AI tools designed to assist in academic research. The discussion focuses on three tools in particular: Connected Papers, Research Rabbit, and Scite. Connected Papers stands out for its transparency, using data from Semantic Scholar to create a visual map of related research that aids users in finding relevant literature. This tool allows researchers to gauge the interconnectedness of papers and prioritize their reading based on citation frequency and relevance. In contrast, Research Rabbit's lack of clarity about its data sources and the meaning of its visual representations raises significant concerns about its reliability.
Rob's critical assessment of Research Rabbit serves as a cautionary tale for researchers tempted to rely solely on AI for literature discovery. He argues that while tools like Research Rabbit can provide useful starting points, they often fall short of the rigorous standards required for academic research. The hosts also discuss Scite, which generates literature reviews based on user input. Although Scite can save time for researchers, both Craig and Rob emphasize the necessity of critical engagement with the content, warning against over-reliance on AI-generated summaries that may lack depth and nuance.

Throughout the episode, the overarching message is clear: while generative AI can enhance research efficiency, it cannot replace critical thinking and human discernment in the research process. Craig and Rob encourage their listeners to embrace these tools as aides rather than crutches, fostering a mindset of skepticism and inquiry. They underscore the importance of maintaining academic integrity in the face of rapidly advancing technology, reminding researchers that their insights and interpretations are invaluable in shaping the future of scholarship. By the end of the episode, listeners are equipped with practical advice on how to navigate the intersection of AI and research, ensuring that they harness the power of these tools responsibly and effectively.

Takeaways:
- Generative AI tools can help streamline academic research but should not replace critical thinking.
- Connected Papers offers transparency in sourcing research papers, unlike some other tools.
- Students must remain skeptical of AI outputs, ensuring they apply critical thought in research.
- Tools like NotebookLM can assist in summarizing and querying research articles effectively.
- Using AI can eliminate busy work, allowing researchers to focus on adding unique insights.
- Educators need to guide students on how to leverage AI tools responsibly and ethically.

Link to Craig's Notebook LM experiment description: https://aigoestocollege.substack.com/p/is-notebook-lm-biased

Links referenced in this episode:
- Notebook LM: notebooklm.google.com
- AI Goes to College: aigoestocollege.com
- Google Learn About: learning.google.com
- Connected Papers: https://www.connectedpapers.com/
- Scite: https://scite.ai/
- Research Rabbit: https://www.researchrabbit.ai/

Mentioned in this episode: AI Goes to College Newsletter
Generative AI is reshaping the landscape of higher education, but the introduction of AI detectors has raised significant concerns among educators. Craig Van Slyke and Robert E. Crossler delve into the limitations and biases of these tools, arguing that they can unfairly penalize innocent students, particularly non-native English speakers. Drawing on evidence from their own experiences, they assert that relying solely on AI detection tools is misguided and encourage educators to focus more on the quality of student work than on the potential use of generative AI. The conversation also highlights the need for context and understanding in assignment design, suggesting that assignments be tailored to class discussions so that students engage meaningfully with the material. As generative AI tools become increasingly integrated into everyday writing aids like Grammarly, the lines blur between acceptable assistance and academic dishonesty, making it crucial for educators to adapt their approaches to assessment and feedback.

In addition to discussing the challenges posed by AI detectors, the hosts introduce Beautiful AI, a slide deck creation tool that leverages generative AI to produce visually stunning presentations. Craig shares his experiences with Beautiful AI, noting its ability to generate compelling slides that improve the quality of presentations without requiring extensive editing. This tool represents a shift in how educators can approach presentations, allowing for a more design-focused experience that can save significant time. The episode encourages educators to explore tools that can streamline their workflows and improve the quality of their output, promoting a more effective use of technology in educational settings. The discussion culminates with a call for educators to embrace generative AI not as a threat but as a resource that can enhance learning and teaching practices.
Takeaways:
- AI detectors are currently unreliable and can unfairly penalize innocent students. It's essential to critically evaluate their results rather than accept them blindly.
- The biases in AI detectors often target non-native English speakers, leading to unfair accusations of cheating.
- Generative AI tools can enhance the quality of writing and presentations, making them more visually appealing and easier to create.
- Beautiful AI can generate visually stunning slide decks quickly, saving time while maintaining quality.
- Tools like Gemini can significantly streamline the process of finding accurate information online, offering a more efficient alternative to traditional searches.
- Educators should contextualize assignments to encourage originality and understanding, rather than relying solely on AI detection tools.

Links referenced in this episode:
- gemini.google.com
- beautiful.ai

Companies mentioned in this episode: Grammarly, Shutterstock, Beautiful AI, Google, Washington State University (WSU), Gemini

Mentioned in this episode: AI Goes to College Newsletter
Craig and Rob dig into the innovative features of Google's Notebook LM, a tool that allows users to upload documents and generate responses based on that content. They discuss how this tool has been particularly beneficial in an academic setting, enhancing students' confidence in their understanding of course materials. The conversation also highlights the importance of using generative AI as a supplement to learning rather than a replacement, emphasizing the need for critical engagement with the technology. Additionally, they share their personal AI toolkits, exploring various tools like Copilot, ChatGPT, and Claude, each with unique strengths for different tasks. The episode wraps up with a look at specialized tools such as Lex, Consensus, and Perplexity AI, encouraging listeners to experiment with these technologies to improve their efficiency and effectiveness in academic and professional environments.

Highlights:
00:17 - Exploring Google's Notebook LM
01:25 - Rob's Experience with Notebook LM in Education
02:05 - The Impact of Notebook LM on Student Learning
04:00 - Creating Podcasts with Notebook LM
05:35 - Generative AI and Student Engagement
06:00 - The Unpredictability of AI Responses
09:35 - Innovative Uses of Generative AI
11:03 - Personal AI Toolkits: What's in Use?
11:10 - Comparing Copilot and ChatGPT/Claude
26:55 - Specialized AI Tools: Perplexity and Consensus
37:22 - Conclusion and Encouragement to Explore AI Tools

Products and websites mentioned:
Google Notebook LM: https://notebooklm.google.com/
Perplexity.ai: https://www.perplexity.ai/
Consensus.app: https://consensus.app/search/
Lex.page: https://lex.page/
Craig's AI Goes to College Substack: https://aigoestocollege.substack.com/

Mentioned in this episode: AI Goes to College Newsletter
This episode of AI Goes to College explores the transformative role of generative AI in higher education, with a particular focus on Microsoft's Copilot and its application in streamlining administrative tasks. Dr. Craig Van Slyke and Dr. Robert E. Crossler share their personal experiences, highlighting how AI tools like Copilot can significantly reduce the time spent on routine emails, agenda creation, and recommendation letters. They emphasize the importance of integrating AI tools into one's workflow to enhance productivity and the value of transparency when using AI-generated content. The episode also examines the broader implications of AI adoption in educational institutions, noting the challenges of choosing the right tools while considering privacy and intellectual property concerns. Additionally, the hosts discuss the innovative potential of AI in transforming pedagogical approaches and the importance of students showcasing their AI skills during job interviews to gain a competitive edge.

In this insightful discussion, Dr. Craig Van Slyke and Dr. Robert E. Crossler explored the transformative potential of generative AI in higher education. Drawing from their extensive experience, they examined how Microsoft's Copilot can alleviate the administrative burdens faced by educators. Dr. Crossler shared his firsthand experience with Copilot's ability to draft emails and create meeting agendas, highlighting the significant time savings and productivity gains for academic professionals. This practical use of AI allows educators to redirect their efforts towards more meaningful tasks such as curriculum development and student engagement. The hosts also addressed the information overload surrounding AI advancements, advising educators to focus on tools that offer tangible benefits rather than getting caught up in the hype.
They discussed the strategic decisions universities face in selecting AI technologies, emphasizing the need for thoughtful integration to maximize educational impact. This conversation underscored the necessity for higher education institutions to remain agile and informed as they navigate the evolving landscape of AI technologies. Further, the episode examined AI tools like Claude and Gemini, showcasing their potential to enhance both academic and personal productivity. Claude's artifact feature was highlighted for its ability to organize AI-generated content, providing a structured approach to integrating AI solutions into educational tasks. Meanwhile, Gemini's prowess in tech support and everyday problem-solving was noted as a testament to AI's versatility. The hosts concluded with advice for students entering the job market, encouraging them to leverage their AI skills to gain a competitive edge in their careers.

Takeaways:
- Generative AI tools can substantially reduce the time spent on routine tasks like email writing.
- Higher education professionals can leverage AI for tasks such as creating meeting agendas and recommendations.
- Using AI requires a shift in how tasks are approached, focusing more on content creation.
- Schools may need to decide which AI tools to support based on their specific needs.
- AI tools like Microsoft Copilot can assist in writing by offering different styles and tones.
- Experimentation with AI in professional settings can lead to significant productivity improvements.

The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like us to cover? Email Craig at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/.
Is ChatGPT bull ...? Maybe not. In this episode Rob and Craig talk about how generative AI can be used to improve communication, give their opinions of a recent article claiming that ChatGPT is bull$hit, and discuss why you need an AI policy.

Key Takeaways:
- AI can be used to improve written communication, but not if you just ask AI to crank out the message. You have to work WITH AI. Rob gives an interesting example of how AI was used to write a difficult message. The key is to co-produce with AI, which results in better outcomes than if either the human or the AI worked alone.
- Is ChatGPT bull$hit? A recent article in Ethics and Information Technology claims that ChatGPT (and generative AI more generally) is bull$hit. Craig and Rob aren't so sure, although the authors make some reasonable points.
- You need an AI policy, even if your college doesn't have one yet. Not only does a policy help you manage risk, a clear policy is necessary to help students understand what is, and is not, acceptable. Otherwise, students are flying blind.

Hicks, M.T., Humphries, J. & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(38). https://doi.org/10.1007/s10676-024-09775-5 (https://link.springer.com/article/10.1007/s10676-024-09775-5)

Mentioned in this episode: AI Goes to College Newsletter
In this episode of AI Goes to College, Craig and Rob dig into the transformative impact of artificial intelligence on higher education. They explore three critical areas where AI is reshaping the academic landscape, offering valuable perspectives for educators, administrators, and students alike.

The episode kicks off with a thoughtful discussion on helping students embrace a long-term view of learning in an era where AI tools make short-term solutions readily available. Craig and Rob tackle the challenges of detecting AI-assisted cheating and propose innovative approaches to course design and assessment. They emphasize the importance of aligning learning objectives with real-world skills and knowledge retention, rather than focusing solely on grades or easily automated tasks. At the end of it all, they wonder if it's time to rethink grading.

Next, the hosts examine recent developments in language models, highlighting the remarkable advancements in speed and capability in Anthropic's new model, Claude 3.5 Sonnet. They introduce listeners to new features like "artifacts" that enhance the user experience and discuss the potential impacts on various academic disciplines, particularly programming education and research methodologies. This segment offers a balanced view of the exciting possibilities and the ethical considerations surrounding these powerful tools.

The final portion of the episode covers the complex world of copyright issues surrounding AI-generated content. Craig and Rob break down the ongoing debate around web scraping practices for AI training data and explore the potential legal and ethical implications for AI users in academic settings. They stress the importance of critical thinking when using AI tools and provide practical advice for educators and students on responsible AI use.
Throughout the episode, the hosts share personal insights, anecdotes from their teaching experiences, and references to current research and industry developments. They maintain a forward-thinking yet grounded approach, acknowledging the uncertainties in this rapidly evolving field while offering actionable strategies for navigating the AI revolution in higher education. This episode is essential listening for anyone involved in or interested in the future of education. It equips listeners with the knowledge and perspectives needed to adapt to and thrive in an AI-enhanced academic environment. Craig and Rob's engaging dialogue not only informs but also inspires listeners to actively participate in shaping the future of education in the age of AI. Whether you're a seasoned educator, a curious student, or an education technology enthusiast, this episode of AI Goes to College provides valuable insights and sparks important conversations about the intersection of AI and higher education. Mentioned in this episode: AI Goes to College Newsletter…
We're in an odd situation with AI. Many ethical students are afraid to use it, and unethical students use it ... unethically. Rob and Craig discuss this dilemma and what we can do about it. They also cover the concept of AI friction and how Apple's recent moves will address this underappreciated barrier to AI use. Other topics include: which AI chatbot is "best" at the moment; using AI to supplement you, not replace you; why you might be using AI wrong; active learning with AI; and more! --- The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like us to cover? Email Craig at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/.
In this episode of "AI Goes to College," Rob and Craig discuss the implications of OpenAI's GPT-4 Omni (GPT-4o), AI fatigue and hysteria, and why prompt design is better than prompt engineering. Craig and Rob explore the implications of GPT-4 Omni's enhanced capabilities, including faster processing, larger context windows, improved voice capabilities, and an expanded feature set available to all users for free. They emphasize the importance of exploring and experimenting with these new technologies, highlighting the transition from prompt engineering to prompt design for a more user-friendly approach. They discuss how prompt design allows for a more iterative and creative process, stressing the need for stakeholders to adapt and incorporate generative AI tools effectively, both in teaching and administrative roles within higher education. Through their conversation, Rob and Craig address the hype and hysteria surrounding generative AI, encouraging listeners to approach these tools with curiosity and a willingness to adapt. They advocate for a balanced perspective, acknowledging both the benefits and risks associated with integrating AI technologies in educational settings. Rob suggests creating a prompt library to capture successful prompts and outputs, facilitating efficiency and consistency in using generative AI tools for various tasks. They also emphasize the importance of listening to stakeholders and gathering feedback to inform effective implementation strategies. Rob and Craig conclude the episode by underscoring the value of continuous exploration, experimentation, and playfulness with new technologies, encouraging listeners to share their experiences and creativity in using generative AI effectively. To stay updated on the latest trends in generative AI and its impact on higher education, listeners are invited to subscribe to the "AI Goes to College" newsletter and watch informative videos on the AI Goes to College YouTube channel.
The hosts invite feedback and suggestions for future episodes, fostering a dynamic and interactive community interested in leveraging AI technologies for educational innovation. Overall, this episode provides valuable insights into navigating the evolving landscape of generative AI in higher education, empowering educators and administrators to adopt a proactive and adaptable approach toward leveraging AI tools for enhanced teaching and administrative practices. --- The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like us to cover? Email Craig at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/.
In this episode, Craig discusses: My vision of the future of generative AI Harpa - a great AI Chrome extension Using Claude to examine an exam Should higher ed fear AI? The highlights of this newsletter are available as a podcast, which is also called AI Goes to College. You can subscribe to the newsletter and the podcast at https://www.aigoestocollege.com/. The newsletter is also available on Substack: (https://aigoestocollege.substack.com/).…
On Tuesday, April 30 at 5 P.M. Eastern time, I'll be giving a talk on the ethics of human-AI co-production. This is part of an annual series called the Marbury Ethics Lectures. I'm quite honored to be the speaker; two years ago, the speaker was then-Louisiana Governor John Bel Edwards. Anyone in the area is welcome to attend in person, but the event will also be live streamed: https://mediasite.latech.edu/Mediasite/Play/8aa374384ff541bc8d76dcf98be7aab91d I'd love it if you could join us! -- The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/. Mentioned in this episode: AI Goes to College Newsletter
In this episode of AI Goes to College, Craig dives deep into the world of AI in education, exploring new tools and models that could revolutionize the way we approach learning and teaching. Join Craig as he shares insights from testing various AI models and introduces a groundbreaking tool called The Curricula. In this episode, Craig talks about: a terrible new anti-AI detection "tool"; whether AI hurts critical thinking and academic performance; how not to talk about AI in education; Claude 3 taking the lead; using Google Docs with Gemini; Claude 3 Haiku as perhaps the best combination of speed and performance; and The Curricula, a glimpse of what AI can be. Anti-AI detection tool: There's a terrible new tool that supposedly helps students get around AI detection systems (which don't work well, by the way). Faculty, you have nothing to worry about here. The tool is a joke. Does AI hurt critical thinking and academic performance? A recent article seems to provide evidence that AI is harmful to critical thinking and academic performance. But, as is often the case, online commenters get it wrong. The paper doesn't show this at all. How not to talk about AI in education: An author affiliated with the London School of Economics wrote an interesting article about how NOT to talk about AI in education. Craig comments on what the article got wrong (in his view). Using Google Docs with Gemini: There are some interesting integrations between Google tools, including Docs and Gemini. It works ... OK, but it's a good start. Claude 3 Haiku: If you haven't checked out Claude 3 Haiku, you should. It may offer the best performance-to-speed combination on the market. The Curricula: The Curricula is an amazing new tool that creates comprehensive learning guides for virtually any topic. Check it out at https://www.thecurricula.com/. Listen to the full episode for the details.
To see screenshots and more, check out Issue #6 of the AI Goes to College newsletter at https://aigoestocollege.substack.com/ -- The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/.
In this week's episode of AI Goes to College, Craig covers a range of topics related to generative AI and its impact on higher education. Here are the key highlights from the episode: Importance of human review: Craig shares a humorous yet enlightening experience with generative AI that emphasizes the crucial role of human review in ensuring the appropriateness and accuracy of AI-generated content. New features for ChatGPT Teams: The latest developments in ChatGPT Teams, including improved chat sharing, GPT Store functionality, and image generation options, offer exciting possibilities for collaborative AI use. SlideSpeak: Craig explores SlideSpeak, a promising tool for quickly creating slide decks from documents using AI. While it's not yet perfect, it shows great potential for streamlining the presentation preparation process. Now, here are the key takeaways for you: 1️⃣ Human review is crucial: Always ensure that AI-generated content goes through human review, especially for important and public-facing materials. 2️⃣ Collaborative AI: New features in ChatGPT Teams foster better collaboration and creativity in AI-powered conversations and content creation. 3️⃣ Streamlining presentations: Tools like SlideSpeak show promise for simplifying and expediting the process of creating slide decks, though they may need some manual adjustments for perfection. Tune in to the full episode for more insights and the latest developments in generative AI! And don't forget to subscribe to the AI Goes to College newsletter for detailed insights and practical tips. Let's keep embracing the future of AI in higher education together! --- The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com.
You can also leave a comment at https://www.aigoestocollege.com/.…