ECEC Conversations | Session 4

AI Ready in ECEC—Mindset, Ethics, Access

AI is increasingly becoming part of our world, and that includes the early childhood education and care sector. Admins, educators and staff need to consider a shift in our thinking, training and support systems to responsibly and equitably embrace AI tools. Sarah Louise and Hayden shared their insights on ways to approach AI in ECEC, particularly from an ethical perspective, while sharing ways to implement AI in the sector that benefit both ECEC staff and children.

Who should watch AI Ready in ECEC—Mindset, Ethics, Access?

This webinar is essential viewing for early childhood professionals navigating the emerging landscape of AI in educational settings. It’s perfect for:

  • ECEC Educators and Teachers—Learn practical strategies for building AI literacy, discover accessible tools and understand how to implement AI ethically while maintaining creativity and critical thinking in your practice.
  • Educational Leaders and Service Administrators—Gain frameworks for developing AI readiness checklists, supporting staff through technology adoption, and ensuring child safety and data privacy remain at the forefront of any AI implementation.
  • Pre-service Teachers and Professional Development Coordinators—Understand the foundational knowledge needed to become AI literate, explore equitable approaches to professional learning, and learn how to differentiate between appropriate and problematic uses of generative AI in early childhood contexts.

Sarah Louise Nelson

Founder, Sarah Louise Consultancy

LinkedIn

Sarah Louise is a career early childhood teacher, pedagogical leader, and lifelong learner. She has dedicated her career to the pursuit of kindness in the workplace and to developing leaders at all levels of organisations. Through Sarah Louise Consultancy, she thrives on the challenge of working with highly diverse groups of early childhood professionals across the sector spectrum. With formal qualifications in education, research and leadership, Sarah Louise is a committed advocate for children, social justice, and the early years workforce.

Dr Hayden Park

Lecturer, Melbourne Polytechnic

LinkedIn

Dr Hayden Park is a Lecturer in Education whose research interests centre around STEM & science education and the use of technology in learning, particularly Artificial Intelligence (AI) and Extended Reality (XR) technologies. Dr Park also maintains a keen interest in all aspects of behaviour guidance within educational settings. His PhD focused on the use of virtual reality to help pre-service teachers learn about School-Wide Positive Behaviour Support (SWPBS), while his current research relates to the integration of generative AI into educational contexts. 

Tash Veiman

Senior Training Specialist – Xplor Education

LinkedIn

Before joining Xplor Education, Tash held significant positions in childcare settings, including centre manager, assistant centre manager, educational leader, and safety and compliance officer. She is able to leverage this extensive experience in the early childhood care and education sector to deliver exceptional support to childcare services as a training specialist at Xplor Education.

Her deep understanding of the challenges and demands faced by childcare workers enables her to provide practical, effective and tailored support. She is dedicated to empowering childcare providers and helping them succeed in their roles by equipping them with the knowledge, training and support they need to thrive.

Watch A Past Session

[00:00:00] Tash: Hello everyone, and welcome to AI Ready in ECEC, Mindset, Ethics and Access. My name is Tash, and I am a senior training specialist with Xplor Education. I also come from 10 years of ECEC sector experience. So, a little bit of an understanding of the topic that we're gonna be talking about today. Joining us today.

[00:00:25] Is Dr Hayden Park. He's a lecturer in education at Melbourne Polytechnic. Dr Park's research explores the future of learning with a strong focus on STEM and science education and the use of emerging technologies, including artificial intelligence and extended reality. His PhD investigated virtual reality as a tool to support pre-service teachers in learning about school-wide positive behaviour support, and his current work continues to examine how generative AI can be effectively integrated into educational settings.

[00:01:03] So big thank you to Hayden for joining us today.

[00:01:06] Hayden: No worries. Hello everyone.

[00:01:09] Tash: Perfect. We are also joined by Sarah Louise Nelson, founder of Sarah Louise Consultancy. This is her second panel with us, so we are so thankful. Sarah is a dedicated early childhood teacher, pedagogical leader, and lifelong learner.

[00:01:25] She's built her career fostering kindness in the workplace and empowering leaders at every level. She brings a wealth of experience working with diverse early childhood professionals across the sector and is a passionate advocate for children, social justice, and the early years workforce. We are thrilled to have you back with us today, Sarah.

[00:01:46] Thank you for joining us.

[00:01:48] Sarah Louise: Thanks so much.

[00:01:49] Tash: Excellent. Let's jump in because this is such a prevalent topic in the ECEC field at the moment. And we're really excited to hear your insights. We'll start with you, Hayden. What is one way generative AI has positively surprised you in the ECEC field this year?

[00:02:08] Hayden: Oh, generative AI does keep surprising us, to say the least. On a positive note, I guess it's events like this, the growing recognition of the importance of AI literacy and the general want to grow in those spaces as educators. From a pure technological front, I'm gonna roll with the audio-based generative AI.

So these are the song generators. I'm not sure if anyone's heard of Suno or Udio, but hearing reports back of some of our students, seeing them and using them on placement. There are some possibilities in those spaces, and they are pretty exciting. So this is technology where, with just a couple of words and the right prompt, you can generate a song on anything.

[00:03:07] So we're seeing educators go out. Make personalised songs for students that educate them on things in a really fun and personalised way. So in terms of positive energy, I think those two spaces, the generative AI and just the growing uptake for and recognition of the importance of AI literacy, pretty positive.

[00:03:33] Tash: I love this. I think it's a very exciting topic, and it's very exciting what we'll have access to in the future. Over to you, Sarah. Do you share the same views that Hayden has, or do you see some more opportunities or risks that we need to focus on before we really bring this into the ECEC sector?

[00:03:51] Sarah Louise: I think Hayden really nailed it there when he mentioned the idea of AI literacy. Because what I'm noticing from a practitioner perspective, working in services, supporting teams, the sector is very quick to jump on a new idea. I think that's really fantastic. I think even though we are sitting in a space of real change fatigue as a sector, we are still able to embrace new ideas, new technologies, new emerging concepts and things like that.

[00:04:22] So we've latched on, as a sector, we've taken this idea of AI, and we've rolled with it. Unfortunately, most of the sector is around my age, and we didn't grow up with AI. We didn't have these tools even 2, 3, 4 years ago at our disposal in early childhood. The tools, I guess, were around, but not specifically for us.

[00:04:42] Now that we have them, we run the risk of going into something almost blind, like we're working it out as we go. So I do see there being significant risks to children, to privacy, to child safety if we're not careful about how we actually engage in this idea of AI literacy and upskill ourselves first before we introduce it into our play spaces.

[00:05:08] Tash: I love this. I definitely agree. I think there…this sector itself is so keen to adopt new ways and really explore that, which is so exciting and progressive. But when we are talking about child safety and child data, there are definitely a lot of things to consider in there as well.

[00:05:26] Hayden, you've worked directly with educators learning this new tech. What's the most effective way to spark curiosity, in your opinion?

[00:05:36] Hayden: Sparking curiosity. A lot of parallels, I reckon, with how we do it in our students. So bringing in that sense of wonder, bringing in that wow factor. I guess the good thing about generative AI is there's a lot of that.

[00:05:51] It does knock your socks off with some of its capabilities. So I think first and foremost, just demonstrating what it can do. Which just to link back to what Sarah was saying, that's like a core dimension of AI literacy. So showing what it's capable of. Like I was talking before about Udio and Suno, the AI generators for songs.

[00:06:15] First time I heard them, they were mind blowing in what they did. So I used them as a recap for a lesson where we pretty much just typed in the things we'd been learning about. I think we decided on psychedelic K-pop as the output, and it created, in four or five seconds, just a staggeringly good song that recapped our lesson.

[00:06:42] And I think that's a really important part of this space, just generating an awareness of the capabilities and showing educators what that results in, in terms of possibilities in the classroom. So seeing it applied and seeing it applied well. So I know everyone's a little bit different in what sparks their curiosity, but for mine, seeing really good examples of the tech rolled out well in a way that improves learning, that kind of gets my sense of curiosity going.

[00:07:14] Tash: I love this. If we dive a little bit deeper just for a second, Hayden, we are really focusing on that audio element of AI. Have you found in your experience that students would learn and adapt that learning better in that auditory way of, maybe having a song where they can remember the song, and they're really adapting that learning?

[00:07:37] Hayden: Yeah. So I think what it's really good for is just another way of presenting the content. I know, just coming from a little bit of a primary background, like thinking of it through differentiation and being multimodal in how we present it. The audio element usually gets teacher voice oriented, and that's about it.

[00:08:00] What this does is it just gives it a new novel layer to a way of presenting information. And the great thing about these AI generators, the audio ones, is that you don't need to be able to read or write to use them, which is pretty handy for some of our younger students.

[00:08:22] Tash: I love this. Thank you so much for that clarification. Sarah, how do you support staff who may be resistant or even really anxious about this change in progression in the sector?

[00:08:34] Sarah Louise: It's a good question because I haven't necessarily come up against anybody who is feeling that sense of anxiety about it. Those that are using it are feeling pretty confident and are wanting to engage.

[00:08:47] I think there's almost a difference here. There's the people that are using it and that are confident in it, and then there's the people that aren't using it at all. And it's that lack of using it in any aspect of their life. I've got friends that use ChatGPT like it's Google. We can debate whether that's appropriate or not at another time. But for the educators that aren't yet using it, it's more just that we just don't have time. We just don't have time to be experimenting. I didn't even know about these great audio ones because the amount of time in the day just doesn't exist. And so my approach has been, when educators are asking, when teams wanna work with me on what this looks like in spaces, it's firstly about just understanding what's out there. What are these tools, how are they currently being used? And then reflecting on where these things are already happening. And I always use the example of, if you have the updated Word program, you've probably got Copilot on there. And that is an AI tool. And when we start to actually break it down, they're like, ‘Oh, actually, when I use this app, there's an AI tool. When I do this thing at work, there's an AI tool.’ It's actually in so much of what we do, and it almost takes the mystery out of it. And then there's a deeper conversation about how we actually engage with the generative functions of AI.

[00:10:01] How we ask the question. What kind of questions we are asking. Keeping children's privacy and safety and all of those things in mind. But I guess it's working in a way that's very reflective, like we always do, like we have always done with everything that's ever come up. We start from that space of question and wonder, curiosity, what does this mean? How is it applied in our own lives? What can we do in the spaces?

[00:10:26] Hayden: Yep. Beautifully said Sarah. That idea that you need to look for why they're anxious or why they're resistant to the new tech, really critical. I think back to the start of when this first came out, and as Sarah's saying, so much of it's just related to, 'I dunno what it is, I don't have the time to engage with another new tech.'

[00:10:50] But there's also a lot of other newer reasons. Like there's this term 'conscientious objector.' And so this is the idea that AI is pretty crook for the environment at the moment. You've got yourself on some pretty solid ground if you reject the notion of using it from a sustainability perspective.

[00:11:09] So you'd approach those two people really differently for helping them see the significance of building their own AI literacy. And I think one of the things that we've really hit upon in the pre-service teacher education space is you're gonna need to teach your students to be AI literate. And if you're not AI literate yourself, you can't do that.

[00:11:36] Tash: I love this. I love this perspective. I definitely think there are two sides in the sector at the moment, which is those that are really jumping in and taking charge and really exploring AI as a tool and a resource that they can work with and alongside.

[00:11:52] And then you've got that other side that is very anxious and worried and not necessarily wanting to use AI because 'Why are we trying to fix something that's not necessarily broken?' And I think they're both really valid. I do think that we're a very progressive sector. And AI isn't gonna be going anywhere, and I think it can be a really helpful tool. So having those resources to be able to explore it further and understand it more deeply is gonna be really key in driving change.

[00:12:22] Now, before we jump into our next questions, if anyone watching does have questions for Hayden or Sarah or both, or there's a question around AI that you really wanna pose to these two professionals, please pop them in the Q & A. We will definitely have time to ask those. There is no such thing as a silly question, so please share it.

[00:12:43] But if we dive into our next question… Sarah, how do you ensure training is truly equitable across providers and what have you seen that works?

[00:12:54] Sarah Louise: I think this is actually where AI's become a fabulous tool for us, is in the professional learning space.

[00:13:00] Because what it's done is it's created more equitable access for everybody. We've moved away, I think as a sector, and I think the pandemic kind of helped with this push (probably the only benefit that came out of that time), but it's pushed us to a space of differentiating learning for adults in the professional learning space.

[00:13:19] So whereas before, so much of our learning was done after hours, a two-hour workshop with a presenter, we've got costs involved. If you're a young parent, that's problematic. If you live far away, if you take public transport, there are so many barriers to that. What we've now got is a time where professional learning can be undertaken anywhere. You can be on the train doing that, listening to a podcast or reading an article or whatever it is. You can do the after-hours workshop, you can do a full day PD, you can do something online. There are so many options, and I think the use of AI has made that a little bit easier for us. Because even if I think about some of my pre-service teachers that I'm working with, we can take an article and turn it into a podcast, so that auditory learners can experience all the same information of that article. An academic article can be a bit tough to read if that's not your style, or you just need a bit of a break. But we've now got these tools where we can go, actually, let's listen to this article as a dialogue. And suddenly we understand the information a little bit easier. It's easier to apply it then to our work. So we're using these tools to create that sense of equity and create more opportunities for access to learning.

[00:14:35] Tash: I love this.

[00:14:36] Hayden: It's great. Just on that tool that Sarah's referring to, that's called NotebookLM, developed by Google. And like Sarah said, it is the divide closer. It takes wordy, impenetrable, scholarly articles and turns 'em into podcasts. And the podcast can be interactive, so you can ask questions mid-episode and the presenters will respond. It's a bit freaky.

[00:15:04] Tash: This is great though because, exactly as you've both covered, it's really opening up learning for every learning type. It's accessible to everyone, which is amazing.

[00:15:15] Hayden, have you seen tech rollouts exacerbate or help close the digital divide? And can you share an example?

[00:15:25] Hayden: Yep, I think, so… tech rollouts probably very roughly follow the trajectory in the short term of really opening up the digital divide. But in the longer term often they close it. So yes, it's tech dependent. But if we look at some of the examples like mobile phones early on. Not many people had mobile phones. Now everybody's got 'em. From an educational perspective, I look at Scratch and coding and early on we had Code Clubs. This was the domain of the private schools.

[00:16:04] And we saw this was very much a dimension where it was your strong students that were getting a leg up in that space. But then you move four or five years down the track, and it turns into Scratch. And I'm not sure if anyone's familiar with Scratch or Scratch Junior, the coding applications for schools, but they are extremely accessible.

[00:16:31] They are something that brings coding and its possibilities through block coding to pretty much anyone. I think you can start at the age of three with the block coding dimensions. And I think, yes, there's lots of examples where we have tech early on really widening that digital divide. But then as it becomes more ubiquitous, as it becomes more widespread, we see the uptake grow and the benefits dispersed a little bit more widely.

[00:17:00] And the good thing about generative AI is it is relatively accessible with the financial model that ChatGPT rolled out. It made it so that if you have a digital device, you can use it for free, which is a relative rarity for a lot of past tech. So yeah, both. But in the long term I'd imagine some of the developments, especially around AI tutoring systems, where we're seeing our students who have potentially fallen through the cracks given the opportunity to target their point of need through an AI tutor.

[00:17:38] A lot of possibilities in that space for closing the digital divide, and for closing the learning divide that has been pretty persistent in our society for a long time.

[00:17:50] Tash: Such a great perspective. Such great examples. I also love that you're sharing some really accessible resources with people who are joining us today, but also people who will review this after. Because again, as Sarah brought up, with the limited time that people have, their understanding of and access to resources really just becomes overwhelming. And instead of being overwhelmed, they'll be able to watch today's session with that curious mindset and maybe take a look and dive in for the first time to AI and really start exploring it in a positive mindset.

[00:18:23] Same with you, Hayden. What do you think is the biggest ethical concern that we would have with AI in the classroom?

[00:18:32] Hayden: Oh there's a long list. Which again, if you're looking at like a core element of AI literacy, it's just knowing what its problems are. Knowing its weaknesses, knowing its limitations, knowing its ethical concerns.

[00:18:47] So regarding the classroom specifically, like I'm tempted to say data hoovering, 'cause it gobbles data and the companies do say, 'Yes, this data that it gobbles for this particular classroom through this particular model, we're not using it.' And yeah, they may not use it, but that doesn't mean it ain't getting pinched.

[00:19:11] So there's not all good actors out there on the internet, and you may think your data's secure, but as we know, it's pretty hard to secure something impenetrably. So data hoovering, getting all of that personal data, all of the things that happen within a classroom listened to and then consumed and put on the internet or turned into another learning model, that's a pretty significant ethical concern. But I think longer term, the bigger issue is around not having students develop appropriate AI literacy. So this is if our students don't understand the limitations of this technology. If they blindly trust it. If they outsource thinking to it. If they treat it as the gospel of information and learning, long term the implications of that are huge.

[00:20:17] That's an undermining of the education system. The flow on effect from this is, we as educators, again, need to be AI literate in order to be able to properly educate our own students in this space. So I think, yep, a bit of a multi-prong response to that one, but, yep. Making sure that our students know the limitations of this tech and use it appropriately, and that the human or even expert in the loop in these systems… critical.

[00:20:55] Tash: A hundred percent. I think you've touched on something that we hear quite a lot here at Xplor Education, but I'm sure it's quite a sentiment out in the ECEC sector as well, around being AI literate. And the question that we get quite often is, how do we stop our educators or our children from becoming dependent on AI as a resource and therefore numbing their curiosity, their creativity, their thought process? We hear it quite a lot. You don't have to have an answer to that. It might be something that we think on and come back to. But yeah, if you have some insights into that and…

[00:21:33] Hayden: Right. I'll let Sarah take that one first, and then I'll dive in, because I've definitely got some thoughts.

[00:21:41] Sarah Louise: It's certainly been something that's on my mind for a while. And I think now myself being in the higher education space and working with pre-service teachers, and, I finished my degree in 2014, so it wasn't too long ago, but it was long enough ago that, my portfolios were handwritten and all my resources that I created were handmade.

[00:22:01] And I was working with a group of students recently on creating picture books. And for me, that's something I do with my own children. It's something I do in the classroom. It's not an unheard-of thought. But the students were very much, 'Oh, can we create the images for the book through AI? Can we write the story using AI?' And I thought, this unfortunately is, for me, one of the biggest ethical problems that we have with the use of generative AI in early childhood. I love that you said outsource thinking, Hayden, because that's what it's doing. The amount of Play School episodes I've consumed in my life, the books, the reading. I had music books on my shelf.

[00:22:39] If you wanted me to make up a song about something right now, I could make you up a song about something right now, because I've got that in me as part of my learning and how I've grown into being the early childhood teacher I am today. And so for me, the ethical risk here is that we are in a space of, 'I don't know how to do it, so I'll just get AI to do it.' Rather than, 'I don't know how to do it, so I'm gonna come up with a way of finding out, or I'm gonna ask someone more senior or more experienced to share that with me.' That's probably my biggest thing there. I'll pass back to you, Hayden, 'cause I know you do have some thoughts on this as well.

[00:23:13] Hayden: Oh they're pretty aligned with that. So when we talk about AI literacy, a bit of an umbrella term and I've thrown it around a lot so I just might break it down a little bit 'cause it is the backbone of how we respond to so many of these things.

[00:23:28] So we're talking about firstly, the technological proficiency. So are you aware of what these tools can do? Are you aware of their strengths, their functions, and which ones we might use for certain situations? And that's a pretty significant body of knowledge there. But really important, if you're using the wrong AI tool for the wrong job, ain't gonna work.

[00:23:56] Another dimension is the pedagogical, so knowing how to teach with them. Another dimension is the professional, so knowing how to use it in your own industry correctly or helpfully or appropriately, in a way that helps you develop your capabilities as an educator long term. But for me, the most important dimension of AI literacy is knowing the limitations and the ethical concerns. And this is something that the students, they see it does this thing for them. They see that positive, and it is very hard to capture and feel the notion that this is harmful in the long term because it's preventing an educational opportunity. And this is where being good educators, teaching the implications of reflexive AI use to our students, has to be an active thing of, 'If you do not actively develop this skillset, you will not have it. You will not be able to be the expert in the loop. You'll be dependent on AI, and that puts you in a really vulnerable position.'

[00:25:13] So this is where the fifth dimension of AI literacy comes in, teaching it to your students. So knowing how to pass this knowledge on effectively, including all of its baggage and shortcomings (probably most importantly, all of its baggage and shortcomings), lies at the centre of AI literacy as an educator.

[00:25:45] Tash: Really great insights there, and I'm so glad we took the time to dive in a little bit deeper to unpack that. Sarah, as a leader, with everything that we've just talked about, how would you work with a team to uphold their values and ensure ethical practice with AI as a new tool?

[00:26:03] Sarah Louise: Yeah, it's a good question. And I sit on the fence about even just using AI in general in early childhood spaces because of some of the ethical dilemmas that we face. But when I'm working with my team, when we're introducing new ideas, we're starting from a place of, 'Well, we're gonna do it with ourselves first.'

[00:26:23] We're gonna use anything that's kind of educator facing. And a recent example was someone suggesting in a team meeting that our parents aren't engaging in policy work. They don't wanna review policies, they don't have time. Let's be real. Some of our policies are 5, 6, 7 pages long.

[00:26:40] What if we used an AI tool to summarise the key points? What do families really need to know about these policies? What are the key changes? What are the key implications for them? And would families engage better if we had that little summary document? And so rather than going straight into thinking about children's learning, how we're using it with children, which all felt a little bit too much like 'We haven't really talked about the implications of child safety yet,' let's do the adult-facing work first. And we did give that a try on one of our policies. It didn't really work, because the tool (or it might've been the way we used it) wasn't giving us the response that we were hoping for. So it's back to the drawing board. It's trying again. Maybe we'll use a different tool. We'll try different prompts. And we'll just keep experimenting. But I think, when we are wanting to work in an ethical way, we need to be aware of the ethical dilemmas that surround whatever it is we're doing, whether it's AI or anything else. And then work through those in a really systematic way, because it's when we move too fast, when we just go, 'Oh, child safety. I've thought about it. Sustainability, I've thought about it. Let's move on.' Actually, do we know how much water is being consumed by AI tools, what power and energy use is happening? There's so much, I don't even have the words to describe what's happening.

[00:28:00] Hayden will be much better placed for that. When we talk about ethics, that's what we're talking about. We're not just talking about here and now in this very service. We're talking broadly in our communities, we're talking globally about the impacts of climate change and all of that.

[00:28:14] So I think, yeah, working small, thinking through every single step before acting and trying things in a way where children won't be impacted first. I think is the approach that I would be taking with my teams.

[00:28:29] Hayden: Yeah, really important, the manageable steps and making sure that you are involved in the process. I think broadly this space is referred to as human in the loop, or even better, expert in the loop.

[00:28:46] And this is the idea that whatever you prompt AI to come out with, if you don't know whether it's right or wrong, you need to go there first. You need to establish whether or not you would be able to judge effectively if the output is correct or incorrect, helpful or not helpful. And if you can't do that, then you're not using it ethically. You're not using it appropriately. You are putting yourself, and whatever output and the consequences of it, at risk, essentially.

[00:29:20] Tash: Some really great insights, and I really hope that this drives discussion in services to talk about, 'Okay, where do we get started? And how can we really research what we're doing and build more expertise around AI?' I guess unpacking that stigma of it taking our jobs or doing our work for us. That's not how it should be used, and while it definitely has been used in that respect, it shouldn't be. So I love that we are really bringing light to the positives, but also the considerations, and hopefully sparking some conversations.

[00:29:55] Hayden: And I think just on that…

[00:29:57] Sarah Louise: No, go ahead. Go.

[00:29:57] Hayden: No, you go.

[00:29:59] Sarah Louise: I was just gonna say, shameless plug, but ECA have an ethical decision-making tool that services can use that aligns with the Code of Ethics. And this would be a perfect opportunity to get that tool out and to say, 'We want to integrate AI into our spaces. We want to think about all the different ways it can be used.' And actually working through that decision-making tool to say, 'What are the implications?' And when we get stuck, that's where we need to do the work, so that you can actually show that ongoing process as well.

[00:30:31] Hayden: Yeah, great. I was just gonna mention before you were referencing don't roll it out in a student or a child context before you've tested it yourself.

[00:30:42] You've got so many opportunities, as you alluded to, Sarah, to test it with your staff to help with planning, to help with resource development. Just the things that are really low stakes before you put it in a little bit more of a high-stakes context. So I just wanted to give that a big shout out: test it. Test it on your staff first, in low-stakes contexts, before you throw it into the classroom.

[00:31:09] Sarah Louise: And I think this is such a benefit of starting this learning in early childhood, because we don't necessarily have the same requirements for outcomes as there potentially are in school-based settings.

[00:31:23] Children under the age of five, hopefully, aren't accessing these tools without an adult present. I would hope that in the primary school years as well, but life happens. So we actually have a chance to play with it. When play is so fundamental to the work we do in the early years, we can use AI as part of our play as adults, to learn and to grow and to be curious together.

[00:31:48] Tash: This is really perfect, and it's definitely sparked some conversation. We do have some questions coming through. Michelle has asked if a link can be shared for the ECA tool that Sarah has referenced. We absolutely can share that link, 100%. I think it's a great resource to be using for anything in the ECEC field, but absolutely, if we are looking at AI, it's a great resource to be working through. We may even gather some of Hayden's resources. He's brought a lot of AI tools and mentioned a few, so we might gather a few of those as well, if people wanna start exploring, and if you're comfortable to share those afterwards as well, Hayden. Perfect. We love this. Okay, Sarah, in your experience, what's the hardest part about making tech accessible in early years services?

[00:32:37] Sarah Louise: You're asking that at a very interesting time. And had you asked me that six months ago, the answer would have been very different.

[00:32:44] We're in a time where we're removing digital devices from classrooms. We are thinking very critically about who has access and what that means and how we continue to do our work in the way we've been doing it. And for some services, this has had more of an impact than others. For some, tech wasn't necessarily part of the everyday anyway.

[00:33:03] And so that's been a very limited negative experience for them. And for other services, it was incredibly integrated, and big decisions have had to be made. When I think about how we ensure access, for me, and I'm a unionist, I keep coming back to that idea that everybody needs time.

[00:33:22] If we're expecting these things of people, if we want people to be exploring tools, reflecting, understanding how we can use something to better our program, we actually need to provide the time for that. And sometimes that's beyond the bare minimum. It's not just the person planning the program that needs the two hours to do the programming, and we can argue about two hours not being enough.

[00:33:45] But we also need to make sure that the new employee who is not responsible for planning also has an opportunity. We need to make sure that the person who's still studying their Cert III also has an opportunity. And so part of ensuring that we have access, and I'm thinking very much about time access, is that we actually create that time in our rosters, in our terms. However we plan our time for people, we need to make sure everybody's got some, so that it is equitable. But we're also thinking about our colleagues in rural and regional areas, where access to technology might be sporadic because of internet issues.

[00:34:21] It might be that there's only one computer in the whole service that people can use. Finding ways around this: what other types of technology can we use? Does it always have to be a computer? Smartphones… is that something we can investigate, now that a lot more services have smartphones?

[00:34:39] So that they've got those as service-issued devices, can they be used for part of these trainings and these ongoing learnings? It is about being a little bit creative with what we've got. Acknowledging that we have staffing shortages, acknowledging that we're in the midst of some very critical digital challenges, particularly around child safety.

[00:34:59] So yeah, I think making sure everybody's got equitable time to explore these ideas and not expecting people to do this out of hours as well.

[00:35:09] Tash: Some really great insights there, 100% aligned. Hayden, from a research perspective, do you have any insights into some emerging solutions that might help address any of the barriers that Sarah's brought up? Or do you have a perspective that maybe differs that you'd like to share?

[00:35:27] Hayden: Just from a general point of view, so much of the purpose of using AI in the first place is to help free up educator time. So, ironically, this thing that's taking time to learn is gonna be the thing that hopefully, if used well and effectively, saves time. But yeah, in terms of dimensions that are gonna help the rollout…

[00:35:55] On your phone, large language models like the ChatGPT app. I've got Perplexity on my phone, which is like a science or journal knowledge-based AI that's pretty definitive. And if I've got a question, it's a very handy touchstone for the first investigation of a concept or a space. So, as large language models become more prescriptive in their use…

[00:36:28] So I know Khan Academy have just rolled out a lesson plan generator. And it's pretty good, in that if you were to take it just off the shelf: 'Yep, this is the thing, this is the learning intention, these are the things that I wanna focus on. Here's the content.' It does a very solid job. And so from a research perspective, as these large language models become more prescriptive for certain contexts, or, I guess a better term is, more personalised or relevant to specific contexts, that's where we're gonna find barriers to access start to fall down a little bit.

[00:37:12] But just as a general rule, often the barriers, especially in this space, might be there for a good reason. So we really have to consider: yep, this tech is here, but firstly, is it appropriate to use? And a lot of the time it's just a hard no. Like, we've got other things that are better suited to teach in this space.

[00:37:37] Tash: Excellent. Hayden, I think you've brought up a really key topic there as well. Just because AI is available, it doesn't mean that services are necessarily ready to adopt it. And we don't have to jump on and do it just because right now it's so relevant. It's out there. It's such a topic that everyone's talking about.

[00:37:56] I don't think I've had a conversation in the last week that didn't involve something that someone saw on ChatGPT. It's new. It's exciting. It doesn't mean that services are necessarily ready for it, and that's okay. But if you do wanna start investigating and exploring, you've both brought such valuable insights into how everyone should start their AI journey.

[00:38:19] And that's not with children. That is with the educators, that's with learning, that's with assessing. So I love this. A question for you both: if you had to co-design an AI readiness checklist for a new centre or a service or a school, what's one must-have from both of you? What is a non-negotiable that must be on the checklist?

[00:38:45] Hayden: Off you go, Sarah.

[00:38:48] Sarah Louise: I was hoping you'd take that first. My non-negotiable is the question: has child safety been considered in this implementation? Whether that's child safety from a child protection point of view, from a privacy point of view, or from a children-accessing-digital-technologies point of view.

[00:39:08] All of it's relevant, and all of it needs to be thought about first. That would absolutely be my number one. If you can't answer that question, if you have not done the deep dive on what AI does with children's data, I don't wanna talk to you about AI. I don't wanna talk about implementation, because that's the work we need to do first. We have an ethical responsibility, we have a legal responsibility, we have a moral responsibility to children.

[00:39:36] Hayden: Yep. Agree. I would probably put on a little bit more of a catch-all: 'I identify as AI literate.' That's smuggling in a few things there. But if you don't identify as AI literate, stay in your lane, get outta there, don't apply it. To get a little bit more fine-grained, and this kind of speaks to what Sarah was talking about, I would say as part of an AI readiness checklist: I have investigated and tested the specific application I'm going to use. Because they're all different.

[00:40:20] You might think, ‘Yeah, I've explored ChatGPT, I understand AI, I understand how it's using data.’ Then you might go and use Claude, or you might go and use Copilot. Completely different. So I think understanding, investigating what specific applications do, how they're used, how they use data… really important. And then testing it yourself to make sure that it is actually the thing that you think it is and that you know how to appropriately use it.

[00:40:55] Tash: I think these are really valid, 100%. And hopefully people are doing this. If they're not, how can we… what does the journey of AI look like from the start for a service? This is a question that we get asked so often. Where should people get started? What should be their first access point when we're talking about AI in the ECEC sector?

[00:41:20] Hayden: Sarah, you want to take that?

[00:41:21] Sarah Louise: No, you go first. I'll keep thinking.

[00:41:23] Hayden: I'd say talks like this. So just general touchstones for AI literacy, seeing what's out there, hearing what other educators are doing. And I think broadly that relates to PD on AI literacy, 'cause once you learn a little bit about this space, you start to see roadmaps for how it could be implemented, but also the hurdles and speed bumps and why they're there. You can give them a little bit more credence and consideration.

[00:42:01] Sarah Louise: I agree. I was gonna say professional learning as well, and I think underscoring that it's not asking ChatGPT what you need to know about generative AI, because I'm sure it'll be very generous and tell you all the wonderful things there are to know about using it.

[00:42:16] But I also think being very mindful and very careful about who is delivering that professional learning. The reality here is that I am a user of AI. I am not an AI specialist; my area of research is leadership. Whereas someone like Hayden, who is doing that work in the AI space, would have, I don't wanna use the word 'correct,' but a stronger understanding of what's happening right now, how it's evolved, how we've gotten to this point, and what it's looking like into the future. Because I can guarantee you can Google AI training and there'll be hundreds of results. How much of it was written by AI rather than by a person with that research expertise? So I think being a little bit discerning about where you're getting your information on your professional learning journey.

[00:43:04] Hayden: Having a nice mix of someone who's across what's under the hood and someone who understands the ins and outs of early childhood education. Nice combo.

[00:43:17] Tash: 100%. Okay. Looking ahead, what's one thing from both of you that you want this sector to really prioritise in the next 12 months?

[00:43:32] Sarah Louise: Slowing down. We're an excitable bunch in early childhood, and we wanna try things. And as Hayden said before, AI is a tool that can be used really well to create more time for us and to reduce our workloads, if we're using it properly. We need to slow down in order to do that.

[00:43:53] We can't just jump in without all the information, without all the strategies at hand. So I think over the next 12 months, I'd love people to just take a breath. AI's not going anywhere. We will continue to see it grow and evolve, so just be patient with yourself and your team with how that's evolving.

[00:44:11] Hayden: Yep. Slowing down and building a good foundation. So I would say just having free access to some really solid material that covers some of those core things I was talking about on the AI literacy front: how does it work? What are its strengths? What are its limitations? What are the ethical concerns?

[00:44:37] Ensuring that there are good, accessible resources. Absolutely critical.

[00:44:47] Tash: Okay. The final question from me, and then we will hand over to attendees. If they have any questions, please pop them into the Q & A. What is one thing from both of you that you wish people would ask about AI but rarely do?

[00:45:05] Hayden: I'm happy to go first on this one. And I think people are asking this; it's just not asked enough, and you can't ask it enough. It would just be, simply: what if it's wrong? And getting into the habit, with every single prompt or output, of asking: what if it's wrong? How would I know? How do I check that?

[00:45:28] I think that's a very important dimension of being AI literate, of being the expert in the room.

[00:45:39] Sarah Louise: Yeah. I'm gonna come off that and just go, 'Yeah, that.' Because it's an easy thing to do. It's an easy thing to just copy and paste and put it somewhere and just assume that it's got it right. But at the end of the day, it's not a human with that human mind, that critical thinking. So asking the questions: is this wrong? Is there another perspective? Does this actually align with what I'm genuinely wanting it to do? And then making an informed decision.

[00:46:10] Tash: I love this from both of you. Another question posed to you both: what's one thing that excites you about AI and its potential in this sector? Hayden, I'll start with you. What's exciting? I know we started our conversation today with some things that you've used and tested. What excites you about AI?

[00:46:32] Hayden: Oh yeah, there's some pretty good stuff out there. Probably the main thing is once you learn to use it effectively, the capacity to use it to expand your educational toolkit is enormous. So specifically, I think the thing that excites me most is once you're AI literate, you will become a better educator.

[00:46:57] When you use this effectively, you'll be able to differentiate better. You'll be able to personalise learning better. You'll be able to align all of your things, your learning activities, your resources with all of the theories and things that you prioritise in your educational philosophies. You'll directly become a better educator when you are able to use this appropriately and effectively.

[00:47:25] I think that's pretty good. It's a pretty helpful lens to check back in with and it's pretty exciting. So there's some incredible applications out there, but they're essentially only as good as your AI literacy allows them to be.

[00:47:43] Sarah Louise: Yeah. And I think, in a very similar flavour, once you've got that AI literacy, the opportunities for expanding creativity in the classroom. If I think about things like songs, stories, the ability to create a resource very quickly for young children. Once we know how to use the tool and use it well, we can have those things ready.

[00:48:02] We can go off and spend 10 minutes writing the song about handwashing, because we couldn't quite remember what that song was that we heard on Play School that one time. But we can have something that genuinely represents our space, that's really contextual, which is oftentimes missing, especially when we're thinking about accessing professional learning as well. We can now create professional learning that is contextually relevant to ourselves by using these tools. But that AI literacy has to underpin all of it.

[00:48:33] Tash: Really valid. We do have a question from Beth. She asked: in your opinions, are we not losing the ability to critically think? She'd really love your thoughts on that. So, Hayden, if you wanna start us off?

[00:48:50] Hayden: Yeah. We'll lose it if we don't use it. And there is the easy path towards just reflexively accepting AI output. But I would flip that question on its head and say, AI use could increase critical thinking because it's another thing to be critical of.

[00:49:11] You have to be at the starting point to begin with, though. So to get to that space, you need to be aware of the limitations of AI. You need to be aware that this is something I should be hyper-critical of. So absolutely, we run the risk if we use AI inappropriately, and this is where it's on us as educators to ensure that we're making students accountable for the thinking in a classroom and the thinking in a centre.

[00:49:48] Sarah Louise: And I, yeah, I wholeheartedly agree. I don't think I can add much more except to say that as educational leaders in particular, this is probably going to be one of the biggest challenges because we know that young people are using AI tools. They are entering our sector, which is amazing to have new people in the sector with us.

[00:50:06] But yeah, if it's being used incorrectly, yes, we will lose the ability to critically think. And so our role then, as educational leaders or pedagogical leaders of spaces, is to engage in this dialogue, use the ethical decision-making tool, and ask: in what way are we using it to support our critical thinking rather than to replace our critical thinking?

[00:50:28] Hayden: Yeah. I look to my science classroom where pretty much the whole thing is built on questions. As soon as you got an answer, you got yourself a problem. And I look at ChatGPT as being a very handy way to get some questions rolling. And to be critical of answers. Yep. I think you're dead right that we run a big risk of just letting it loose in a classroom. But I think it's a risk that, if curtailed properly, turns into a huge advantage.

[00:51:05] Tash: Really great question, Beth. I think there is a long way for us to go in investigating AI and how it fits into early childhood education. But I love all of the resources, the thoughts, the vulnerability that you've both shown in sharing your insights. I really hope that everyone who's joined us today, and everyone who watches this webinar at a later time, takes this as a discussion point and really takes it back to their team to firstly establish, 'Are we ready for AI?'

[00:51:36] And if we are, what do we need to do, and how can we further investigate? How can we become AI literate? How can we start researching and make sure that we're not trying to use this tool to take over our role, but rather to enhance our role and our learning experiences? You both have been invaluable in your knowledge and your sharing.

[00:51:56] We don't have any more questions, so I'm gonna thank everyone for joining us today. This does wrap up our ECEC Conversations for 2025. So thank you to everyone who has joined us for our four conversations held so far this year. They will be returning in 2026 with a new lineup. So if you have any insights that you'd like to share, if there's topics that you'd like us to be covering, please reach out.

[00:52:21] We're always open to your feedback. For today, thank you so much, Hayden. Thank you so much, Sarah. This has been incredibly amazing to be part of.

[00:52:32] Hayden: No worries. Thanks, Tash, for organising. Thanks, Sarah, for being a wonderful co-presenter.

[00:52:39] Sarah Louise: Thank you both.

[00:52:40] Tash: Excellent. Have a great afternoon everyone.

[00:52:42] Hayden: Thanks everyone. And I've just put those resources in the Q & A as well. If anyone wants to look at Suno or NotebookLM.

[00:52:50] Tash: Thanks, Hayden. Bye, everyone.
