Microsoft Launches Free AI Assistant For All Educators in US in Deal With Khan Academy

Microsoft is partnering with tutoring organization Khan Academy to provide a generative AI assistant to all teachers in the U.S. for free. From a report: Khanmigo for Teachers, which helps teachers prepare lessons for class, is free to all educators in the U.S. as of Tuesday. The program can help create lessons, analyze student performance, plan assignments, and provide teachers with opportunities to enhance their own learning. "Unlike most things in technology and education in the past where this is a 'nice-to-have,' this is a 'must-have' for a lot of teachers," Sal Khan, founder and CEO of Khan Academy, said in a CNBC "Squawk Box" interview last Friday ahead of the deal. Khan Academy has roughly 170 million registered users around the world and offers content in over 50 languages. While it is best known for its videos, its interactive exercise platform is what Sam Altman and Greg Brockman, the top executives of Microsoft-funded artificial intelligence company OpenAI, zeroed in on early when they were looking for a partner with socially positive use cases to pilot GPT.

Study: Alphabetical Order of Surnames May Affect Grading

AmiMoJo writes: Knowing your ABCs is essential to academic success, but having a last name starting with A, B or C might also help make the grade. An analysis by University of Michigan researchers of more than 30 million grading records from U-M finds that students with alphabetically lower-ranked surnames receive lower grades. The researchers attribute this to sequential grading biases combined with the default order of student submissions in Canvas -- the most widely used online learning management system -- which sorts submissions by the alphabetical rank of students' surnames. What's more, the researchers found, those alphabetically disadvantaged students receive comments that are notably more negative and less polite, and the grading of their work is of lower quality, as measured by post-grade complaints from students.
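
The mechanism the researchers point to is easy to picture with a toy simulation: if a grader works through a Canvas queue sorted A to Z and drifts slightly harsher with each submission, later surnames end up with lower averages even when underlying quality is identical. The sketch below is purely illustrative; the class size, drift rate, and score distribution are invented for the example and are not taken from the study.

```python
import random

# Illustrative toy model only -- not the U-M study's data or method.
# Assumption: submissions are graded in alphabetical (queue) order and the
# grader drifts slightly harsher with each paper ("sequential grading bias").
random.seed(0)

CLASS_SIZE = 200
DRIFT_PER_PAPER = 0.02  # assumed points lost per position in the queue

def graded_score(queue_position: int) -> float:
    true_quality = random.gauss(85, 5)  # quality is unrelated to surname
    return true_quality - DRIFT_PER_PAPER * queue_position

scores = [graded_score(pos) for pos in range(CLASS_SIZE)]
early = sum(scores[:CLASS_SIZE // 2]) / (CLASS_SIZE // 2)  # early-alphabet surnames
late = sum(scores[CLASS_SIZE // 2:]) / (CLASS_SIZE // 2)   # late-alphabet surnames
print(f"early-alphabet average: {early:.2f}, late-alphabet average: {late:.2f}")
```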

Students Are Likely Writing Millions of Papers With AI

Amanda Hoover reports via Wired: Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows. A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, and 3 percent of the total papers reviewed were flagged as having 80 percent or more AI-written content. Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
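
For context, the headline number follows directly from the percentages Turnitin reports: 11 percent of the roughly 200 million papers reviewed is about 22 million. A quick back-of-the-envelope check, using the round 200 million figure as a stand-in for the article's "more than 200 million":

```python
# Back-of-the-envelope check of the Turnitin figures quoted above.
papers_reviewed = 200_000_000               # "more than 200 million papers"
flagged_some_ai = 0.11 * papers_reviewed    # >= 20% of content flagged as AI
flagged_mostly_ai = 0.03 * papers_reviewed  # >= 80% of content flagged as AI

print(f"{flagged_some_ai:,.0f} papers with at least 20% AI-flagged content")    # ~22,000,000
print(f"{flagged_mostly_ai:,.0f} papers with at least 80% AI-flagged content")  # ~6,000,000
```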

Harvard Reinstates Standardized Testing Requirement

Harvard College is reinstating its standardized testing requirement, reversing course on a pandemic-era policy that made test scores optional. The move follows similar ones by elite universities like Yale, Dartmouth, and MIT. Axios reports: At Harvard, the mandate will be in place for students applying to begin school in fall 2025. Harvard had previously committed to a test-optional policy for applicants through the class of 2030, which would have started in fall 2026. Most students who applied since the pandemic began have submitted test scores despite the test-optional policy, the university said. Reviewing SAT/ACT scores as part of a student's application packet helps make admissions decisions more holistic, the university said in a statement. "Standardized tests are a means for all students, regardless of their background and life experience, to provide information that is predictive of success in college and beyond," Hopi Hoekstra, a Harvard dean, said in the statement. "Indeed, when students have the option of not submitting their test scores, they may choose to withhold information that, when interpreted by the admissions committee in the context of the local norms of their school, could have potentially helped their application."

Code.org Launches AI Teaching Assistant For Grades 6-10 In Stanford Partnership

theodp writes: From a Wednesday press release: "Code.org, in collaboration with The Piech Lab at Stanford University, launched today its AI Teaching Assistant, ushering in a new era of computer science instruction to support teachers in preparing students with the foundational skills necessary to work, live and thrive in an AI world. [...] Launching as a part of Code.org's leading Computer Science Discoveries (CSD) curriculum [for grades 6-10], the tool is designed to bolster teacher confidence in teaching computer science." EdWeek reports that in a limited pilot project involving twenty teachers nationwide, the AI computer science grading tool cut one middle school teacher's grading time in half. Code.org is now inviting an additional 300 teachers to give the tool a try. "Many teachers who lead computer science courses," EdWeek notes, "don't have a degree in the subject -- or even much training on how to teach it -- and might be the only educator in their school leading a computer science course."

Stanford's Piech Lab is headed by assistant professor of CS Chris Piech, who also runs the wildly successful free Code in Place MOOC (30,000+ learners and counting), which teaches fundamentals from Stanford's flagship introduction-to-Python course. Before developing the new AI teaching assistant, which automatically assesses Code.org students' JavaScript game code, Piech worked on a Stanford research team that partnered with Code.org nearly a decade ago to create algorithms that generate hints for K-12 students trying to solve Code.org's Hour of Code block-based programming puzzles (2015 paper [PDF]). And several years ago, Piech's lab again teamed with Code.org on Play-to-Grade, which sought to "provide scalable automated grading on all types of coding assignments" by analyzing the gameplay of Code.org students' projects. Play-to-Grade, a 2022 paper (PDF) noted, was "supported in part by a Stanford Hoffman-Yee Human Centered AI grant" for AI tutors to help prepare students for the 21st century workforce. That project also aimed to develop a "Super Teaching Assistant" for Piech's Code in Place MOOC. LinkedIn co-founder Reid Hoffman, who was present for the presentation of the 'AI Tutors' work he and his wife funded, is a Code.org Diamond Supporter ($1+ million).

In other AI grading news, Texas will use computers to grade written answers on this year's STAAR tests. The state will save more than $15 million by using technology similar to ChatGPT to give initial scores, reducing the number of human graders needed.
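
The Play-to-Grade idea of judging a project by its observed behavior rather than by reading its source can be pictured with a deliberately tiny sketch. This is not the paper's actual method, which models gameplay much more richly; it only shows the underlying notion of running a student program against scripted plays and comparing the result to a reference. All names and game rules below are invented for the example.

```python
# Toy illustration of behavior-based checking, loosely inspired by the
# "grade the gameplay, not the source" idea. NOT the Play-to-Grade algorithm.

def reference_paddle(ball_y: int, paddle_y: int) -> str:
    """Reference behavior: the ball bounces once it reaches the paddle."""
    return "bounce" if ball_y >= paddle_y else "fall"

def student_paddle(ball_y: int, paddle_y: int) -> str:
    """Hypothetical student submission with an off-by-one bug."""
    return "bounce" if ball_y > paddle_y else "fall"

scripted_plays = [(4, 5), (5, 5), (6, 5)]  # (ball_y, paddle_y) game states

deviations = [play for play in scripted_plays
              if student_paddle(*play) != reference_paddle(*play)]
print(f"{len(deviations)} of {len(scripted_plays)} plays deviate: {deviations}")
```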

Professors Are Now Using AI to Grade Essays. Are There Ethical Concerns?

A professor at Ithaca College runs part of each student's essay through ChatGPT, "asking the AI tool to critique and suggest how to improve the work," reports CNN. (The professor said "The best way to look at AI for grading is as a teaching assistant or research assistant who might do a first pass ... and it does a pretty good job at that.") The same professor then requires their class of 15 students to run their drafts through ChatGPT to see where they can make improvements, according to the article:

Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism-detection platform Turnitin, found half of college students used AI tools in fall 2023. Fewer faculty members used AI, but the share grew to 22% in fall 2023, up from 9% in spring 2023. Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They're also using the burgeoning tools to create quizzes, polls, videos and interactives to "up the ante" for what's expected in the classroom. Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint and other products. But while some schools have formed policies on how students can or can't use AI for schoolwork, many do not have guidelines for teachers. The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder whether an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

A professor of business ethics at the University of Virginia "suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and give a numerical score on those figures," according to the article. ("But teachers should then grade students' work themselves when looking for novelty, creativity and depth of insight.") But a writer's workshop teacher at the University of Lynchburg in Virginia "also sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..." Even the Ithaca professor acknowledged to CNN that "If teachers use it solely to grade, and the students are using it solely to produce a final product, it's not going to work."
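
For concreteness, the "first pass" workflow the Ithaca professor describes, critique and suggestions but no grade, can be scripted against a chat-completion API. The sketch below is a minimal illustration assuming the OpenAI Python client; the model name, the prompt wording, and the choice to send only an excerpt are assumptions made for the example, not anything prescribed in the article. The intellectual-property and privacy concerns the Lynchburg instructor raises apply to any such script, so consent and anonymization would matter in practice.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def first_pass_critique(essay_excerpt: str) -> str:
    """Ask the model for a critique of an excerpt -- feedback only, no grade."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model could be used
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing teaching assistant doing a first pass. "
                    "Critique the essay excerpt's structure, clarity, and "
                    "grammar, and suggest concrete improvements. Do not "
                    "assign a grade."
                ),
            },
            {"role": "user", "content": essay_excerpt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(first_pass_critique("The industrial revolution change many things..."))
```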

AI's Impact on CS Education Likened to Calculator's Impact on Math Education

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: "Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the 'number sense' to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."

Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org.) Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be, at least in the near term, a need to read and understand code so that you can assess the reliability, the correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be.

Chronic Student Absenteeism Soars Across US

The US has seen a significant increase in student absenteeism since the pandemic closed schools four years ago, with an estimated 26% of public school students considered chronically absent in the last school year, up from 15% before the pandemic, according to data from 40 states and Washington, D.C. A report adds: The increases have occurred in districts big and small, and across income and race. For districts in wealthier areas, chronic absenteeism rates have about doubled, to 19 percent in the 2022-23 school year from 10 percent before the pandemic, a New York Times analysis of the data found. Poor communities, which started with elevated rates of student absenteeism, are facing an even bigger crisis: Around 32 percent of students in the poorest districts were chronically absent in the 2022-23 school year, up from 19 percent before the pandemic. Even districts that reopened quickly during the pandemic, in fall 2020, have seen vast increases.
