PixelProf

@PixelProf@lemmy.ca
1 Post – 63 Comments
Joined 13 months ago

Education has a fundamental incentive problem. I want to embrace AI in my classroom. I've been studying ways of using AI for personalized education since I was in grade school. I wanted personalized education, the ability to learn off of any tangent I wanted, to have tools to help me discover what I don't know so I could go learn it.

The problem is, I'm the minority. Many of my students don't want to be there. They want a job in the field, but don't want to do the work. Your required course isn't important to them, because they aren't instructional designers who recognize that this mandatory tangent is scaffolding the next four years of their degree. They have a scholarship, and can't afford to fail your assignment just to get feedback. They have too many courses, and have to budget which courses to ignore. The university holds a duty to validate that those passing the courses met a level of standards and can reproduce their knowledge outside of a classroom environment. They have a strict timeline - every year they don't certify their knowledge to satisfaction is a year of tuition and random other fees to pay.

If students were going to university to learn, or going to high school to learn, instead of being forced there by societal pressures - if they were allowed to learn at their own pace without fear of financial ruin - if they were allowed to explore the topics they love instead of the topics that are financially sound - then there would be no issue with any of these tools. But the truth is much bleaker.

Great students are using these tools in astounding ways to learn, to grow, to explore. Other students - not bad necessarily, but ones under pressures that make education motivated purely by extrinsic factors rather than intrinsic ones - have a perfect crutch available to accidentally bypass the necessary steps of learning. Because learning can be hard, and tedious, and expensive, and if you don't love it, you'll take the path of least resistance.

In game design, we talk about not giving the player the tools to optimize their fun away. I love the new wave of AI - I've been waiting for this level of natural language processing and generation capability for a very long time - but these are the tools for students to optimize the learning away. We need to reframe learning and education. We need to bring learning front and center instead of certification. Employers need to recognize this, universities need to recognize this, high schools and students and parents need to recognize this.


I almost exclusively use self-checkout for groceries, and it has drastically sped up my checkout time, as most people in my area opt for traditional checkout and the stores are still keeping lots of lanes open (just closing the express lanes). The last 3 times I've used a non-self checkout, I was double charged for items or didn't have reduced prices applied, and didn't notice because I was bagging.


It depends what "from scratch" means to you, as I don't know your level of programming or your interests. You could be talking about making a game from beginning to end, and that could mean...

  • Using a general purpose game engine (Unity, Godot, Unreal) and pre-made assets (e.g., Unity Asset Store, Epic Marketplace)?
  • Using a general purpose game engine almost purely as a rendering+input engine with a nice user interface, and building your own engine on top of that?
  • Using frameworks for user input and rendering images, but not necessarily ones built for games - they're more general purpose, so you'll need to write a lot of game code to put it all together into your own engine before you even start "making the game", but they offer extreme control over every piece so you can make something very strange and experimental, at the cost of lots of technical overhead before you get started?
  • Writing your own frameworks for handling user input and rendering images... the same as the previous, but you'll spend 99% of your time reinventing the wheel and trying to get it to go as fast as any off-the-shelf replacement.

If you're new to programming and just want to make a game, consider Godot with GDScript - here's a guide created in Godot to learn GDScript interactively with no programming experience. GDScript is a lot like Python (a very widely used language outside of games), but it's exclusive to Godot, so you'd eventually need to transfer those skills. You can also use C# in Godot; it's a bigger learning curve, but it's very general and used in a lot of games.

I'm a big Godot fan, but Unity and Unreal Engine are solid too. Unreal might have a steeper learning curve; Godot is a free and open-source project with a nice community, but it doesn't have the extensive userbase and forum repository of Unity and Unreal; Unity is so widely used that there's lots of info out there.

If you did want to go really from scratch, you can try something like Pygame in Python or Processing in Java, which are entirely code-driven (no editor interface) but offer lots of helpful functionality for making games purely from code. Very flexible. That said, they'll often run slow, they'll take more time to get started on a project, and you'll fairly quickly hit a ceiling on how much you can realistically do in them.
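For a taste of what "purely from code" looks like, here's a minimal sketch of a Pygame loop (assuming pygame is installed; the square-mover "game" is just illustrative):

```python
# Minimal Pygame loop: a window with a square you can move with the arrow keys.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
x, y = 320, 240

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 5
    y += (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * 5

    screen.fill((30, 30, 30))                                # clear the frame
    pygame.draw.rect(screen, (200, 80, 80), (x, y, 32, 32))  # draw the "player"
    pygame.display.flip()                                    # present the frame
    clock.tick(60)                                           # cap at ~60 FPS

pygame.quit()
```

Everything - input, drawing, timing, game state - is yours to wire up, which is exactly the flexibility and the overhead I mean.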

If you want to go a bit lower, C++ with SDL2, learning OpenGL, and learning about how games are rendered and all that is great - it will be fast, and you'll learn the skills to modify Godot, Unreal, etc. to do anything you'd like, but similar caveats to the previous option: there's likely a low ceiling for the quality you'll be able to put out, and high overhead to get started on a project.

Getting really speculative, but maybe infinite scrolling and similar UX design patterns. I think we learned it was dangerous pretty early on, but I have a feeling there isn't currently a widespread understanding of just how badly things like infinite scrolling short-circuit parts of the brain and cause issues with attention and time regulation in large populations.

If I'd researched it more, I might include infinite short-form content feeds of almost any type, to be honest, which may just be another way of saying social media.

I know it's controversial, but moving away from "guys" when I address a group and more or less defaulting to "they" when referring to people I don't know.

"They" was practical, because I deal with so many students exclusively via email, and the majority of them have foreign names where I'd never be able to guess a gender anyway if they didn't state pronouns.

Switching away from "guys" was natural too. I'm in a very male-dominated field, and I'd heard from women students in my undergrad that they did feel just a bit excluded in a class setting (not as much in social settings) when the professor addresses a room of 120 men and 5 women with "guys", so it more or less fell to the side in favour of folks/everyone.


Or, hourly = extremely high paid contract work.


I think he's basically saying that it's racist to "artificially" integrate communities, because (I think he's saying) if they need to be integrated, then that's the same as saying that black folks are necessarily inferior. I don't think he's trying to say they're inferior, but that laws forcing integration are based on that assumption. So he can be well educated and successful because he isn't inherently inferior, therefore there is no need for forced integration.

... Which is such a weird stretch of naturalism in a direction I wasn't ready for. Naturalist BS is usually, "X deserves fewer rights because they are naturally inferior", whereas this is "We should ignore historical circumstances because X is not naturally inferior".

Start a game of Monopoly after three other players have already gone around the board 10 times and created lots of rules explicitly preventing you from playing how they did, and see how far the argument of "well, to give you any kind of advantage here would just be stating you're inferior, and we can't do that" gets you.

Man probably got angry at his golf handicap making him feel inferior and took things too far. Among other things.


It's also tough to reconcile that I may thrive in high-pressure situations, but they're still exhausting, I don't like them, and I definitely don't like being dependent on them to get anything done. Medication helped the minute-to-minute, but the week-to-week is still a total blur.

I understand that he's framing these relative to quantum computing, and that he is specifically a scientist who is deeply invested in that realm, but it just seems too reductionist from a software perspective. Ultimately, yeah - we are indeed limited by the architecture of our physical computing paradigm, but that doesn't discount the incredible advancements we've made in the space.

Maybe I'm being too hyperbolic over this small article, but does this basically mean any advancements in CS research are just glorified (insert elementary mechanical thing here) because they use bits and von Neumann architecture?

I used to adore Kaku when I was young, but as I got into academics, saw how attached he was to string theory long after its expiry date, and saw how popular he got on pretty wild and speculative fiction, I've struggled to take him too seriously in this realm.

In my experience - which comes from years in labs working on creative computation, AI, and NLP - these large language models are impressive and revolutionary, but quite frankly for dumb reasons. The transformer was a great advancement, but seemingly only once we piled obscene, previously unspeculated amounts of data onto it. Now we can train smaller bots off of the data from these bigger ones, which is neat, but it's still that mass of data.

To the general public: Yes, LLMs are overblown. To someone who spent years researching creativity-assistance AI and NLP: These are freaking awesome, and I'm amazed at the capabilities we have now in creating code that can do qualitative analysis and natural language interfacing, but the model is unsustainable unless techniques like Orca come along and shrink down the data requirements. That said, I'm running pretty competent language and image models on a relatively cheap 12GB consumer video card, so we're progressing fast.

Edit to Add: And I do agree that we're going to see wild stuff with quantum computing one day, but that can't discount the excellent research being done by folks working with existing hardware, and it's upsetting to hear a scientist balk at a field like that. And I recognize I led this by speaking down on string theory, but string theory pop science (including Dr. Kaku) caused havoc in people taking physics seriously.


Poorly.

More seriously, I didn't know I had ADHD, but I'd kind of naturally contorted my world to support it as best as I could. I worked flexible, four-month contracts. I only worked in low-stakes positions where leaving after a few months was expected. When I was a young kid, I was really good at convincing teachers that they didn't need to see my homework or that I needed an extra day, because even though the work was trivial, I wouldn't do it until the day after the deadline.

I've minimized obligations where I can, like autopay for every bill, I don't drive to avoid having to take it to the shop and do maintenance, I rent so that I'm not on the hook for maintenance, and I chronically overthink purchases to avoid impulse spending most of the time, at the sacrifice of not getting things I probably need.

I'm still working on it, but I think reducing the places where you can really mess things up on a bad brain day and doing what you can to nurture an environment where you can follow your rhythms is important. Way easier said than done, of course.

I'm at least happy to see some decent, really cheap (<$100 CAD) smart phones popping up that are competent enough to work with, but the phone is still such a single point of failure for so many aspects of life right now. It's not even just not having a phone: a dead battery (with no ability to swap in a backup like you used to), a phone spontaneously breaking, or losing cell service at an inopportune time can cut you off from your virtual tickets and everything else.

I don't mind smart phones, but the single point of failure for so much is really not good.


If I forget to eat, a few spoons of peanut butter seems to work pretty well. I do my best to have some kind of easy protein around - hard-boiled eggs, little protein-heavy pucks of oatmeal in the freezer, or protein bars to grab. It really makes a big difference (Vyvanse).

I agree with the recommendation for talking to the doctor; I'm in a similar position. This might be completely unrelated to your situation, so take it with many grains of salt.

What I've been learning is that a lot of my own burnout cycling seems to be cycles of intense and constant masking. I struggle with social situations, with following "chronos" time instead of "kairos" (if you will entertain the misuse of these), and I eventually get so buried in obligations and in actively resisting my natural impulses that my wants and needs get muddied and untended, and the pain of pretending or entering social modes builds up too much. Being in a mode - "professor mode", "friend mode", "colleague mode", "spouse mode" - they all build up the tension, and while I can seem socially adept, if I'm not in a mode I'm pretty useless. My Uber drivers can attest. Sometimes I do need a break from social activities while I break down the mask that builds up. A decalcifying of the brain. My friends seem to understand.

I'm a teaching professor, and I love the work - it has high flexibility but strict accountability, lots of room to experiment and find novelty, but a massive social burden and a ridiculous workload. The only thing I've found to help so far, besides medication, is doing less. Fewer work obligations meant more time to de-mask; it meant I could take the extra time on my tasks that I'd refused to admit to myself I needed compared to colleagues, and more time to do stupid random impulsive (but safe) BS, which I've found is the most relaxing thing for me, and that naturally led to meditating, exercising, and eating a bit healthier, which made things feel more manageable.

I don't know what the long-term prospects of this realization are for me, but consider that ADHD usually means tasks will take longer and require more effort than they do for typical people. Admitting to myself that it's a disability, and that I don't need to work twice or three times as hard as other people to make up for it all the time, has been really important.

You also mentioned trauma; a lifetime of letting people down without knowing why really turned me into an over-supporter as an adult. A fawning response to stress - feeling the stress build up and instinctively doing whatever you can to help other people at the cost of yourself, rather than fighting or running away or freezing up - and then when you're alone you fight yourself, freeze, or run away from everything. I've been told it's a form of invisible self-harm, and it's nefarious because the goal is to make everyone else see things as all right.

So I don't know if any of this clicks true for you, and I don't fully know the solution to these, but awareness of my own issues has helped a lot, and I think awareness and recognition is key to getting started. Years of therapy, meditation, medication, it's all making progress, but it's slow, and awareness has been key to any of the positives. For me, it seems like working less and admitting to myself I have a disability, undoing years of traumatic people pleasing at my own expense, and learning to unmask more in social interactions and at home are key, however that path is treaded.

Yeah, I think framing it similar to the old days might help, but I could be wrong. Like, you aren't signing up for PHP Fusion or something (just to pick a web equivalent), you're signing up for your gaming clan's forum, or your roleplay group, or your Canadian phreak BB. The difference with Lemmy is just that you also indirectly sign up to receive content from a lot of other places using the same protocol.

IMO, I think the framing/abstraction will make or break the future of the paradigm for mainstream consumption. Not to get into another repeat of the EEE discussion, but assuming nothing nefarious from something like Threads, that would mean people start an account there and then find a niche group with their friends to go hang out on instead.

I also have to push back against the pushback against the paradigm going mainstream, because again IMO a move back toward decentralized platforms is really important for the future of the internet and quite frankly the global economy.

Just editing to expand, but I think maybe there's a problem in framing Lemmy or Mastodon as communities in themselves, because it really conflicts with the model of instancing and email that is being used to describe them.


For me, it's the next major milestone in what's been a roughly decade-ish trend of research, and the groundbreaking part is how rapidly it accelerated. We saw a similar boom in 2012-2018, and now it's just accelerating.

Before 2011/2012, if your network was too deep - too many layers - it would just break down and give pretty random results; it couldn't learn, so networks had to perform relatively simple tasks. Then a few techniques were developed that enabled deep learning, the ability to really stretch the number of patterns a network could learn if given enough data. Suddenly, things that were jokes in computer science became reality. With deep networks, for example, image recognition error rates halved in about a year, going from roughly 35-40% incorrect classification to 5% over about five years. That's the same stuff that powered all the hype around AI beating Go champions and professional Starcraft players.

The Transformer (the T in GPT) came out in 2017, around the peak of the deep learning boom. Two years later, GPT-2 was released, and while it's funny to look back on now, it practically revolutionized temporal data coherence and showed that throwing lots of data at this architecture didn't break it, like it had previous ones. Then they kept throwing more and more data at it, and it kept going and improving. With GPT-3 about a year later, just like in 2012, we saw an immediate spike in previously impossible challenges being destroyed, and seemingly the models haven't degraded with more data yet. While it's unsustainable, it's the same kind of puzzle piece that pushed deep learning into the forefront in 2012, and the same concepts are being applied to different domains like image generation, which has also seen massive boosts thanks in part to the 2017 research.

Anyways, small rant, but yeah - its hype lies in its historical context, for me. The chat bot is an incredible demonstration of the underlying advancements in data processing that were made in the past decade, and if working out patterns from massive quantities of data is a pointless endeavour, I have sad news for all folks with brains.


I agree with this, but we'd need to draw lines in the analogy. For example, my CS students struggle with downloading and installing a program, and don't know how to find files that they've saved in a text editor. We'd be concerned if the people driving didn't know where their turn signal was, hah.

A lot of students grew up using Chromebooks as their primary computer, so they're largely limited to app stores and web browsers.

You check the clock. You check again, because you didn't actually read the time - you were so absorbed in the process of checking the clock that you forgot to check the clock.

You check the clock again. You have a new email. You consider checking the clock again, but give up and accept your fate because checking the clock a (second? Third? Tenth? First?) time is just too much right now, you're already running late anyways so it was kind of all procrastinating in the first place. You don't even know what you were supposed to be checking it for. Just wait and see, it's probably not that important. Maybe you'll check the clock and see if it sparks your memory.

You check the clock. You finally see the time. The bus drives past you.

There are the points others have made about the business model - for a long time, the "momentum" oriented approach was essentially a Ponzi scheme where investors would pour money into a business that would take the risk of major losses so that it could destroy all competition in a space, then eventually turn a profit by changing tactics in user-unfriendly ways long down the line, once it had the monopoly.

For this particular issue, though, I think we're seeing the Rotten Tomatoes effect en masse. If you want to make something bold and impressive, you need something people love or hate - not something in between. With Rotten Tomatoes as an example, it's binary - Positive or Negative. This incentivizes movie production to produce things that are not controversial, just things that people won't strongly dislike.

With centralized platforms, the product models stopped being about providing high quality products and began valuing time spent on the platform. Produce a website/platform that most people are okay with and the majority aren't extremely opposed to. This means it won't do anything bold, but it does mean you'll pick up a critical mass and become the dominant force, as you're appealing to the majority.

In a content-driven economy, whoever has the users and the content rides that positive feedback loop to monopolies. More users = more content = more users.

Algorithms get worse because they're appealing to "Good Enough". If it gets bold and suggests something that you might either love or hate, then you might hate it and leave the site for a bit, but if everything is good enough, you'll stick around. Web design gets blander because things get familiar, and especially after the start of Facebook, we learned that people really choose familiarity over novelty. Movies, TV, and Music get blander because they are now driven by the same platform economics where sticking around on the platform is valued more than appreciating the content of the platform.

Oh for sure. And it's a great realm to research, but pretty dirty to rip apart another field to bolster your own. Then again, string theorist...

Lots of immediate hate for AI, but I'm all for local AI if they keep that direction. Small models are getting really impressive, and if they have smaller, fine-tuned, specific-purpose AI over the "general purpose" LLMs, they'd be much more efficient at their jobs. I've been rocking local LLMs for a while and they've been a great small complement to language-processing tasks in my coding.

Good text-to-speech, page summarization, contextual content blocking, translation, bias/sentiment detection, clickbait detection, article re-titling - I'm sure there are many great use cases. And purely speculation, but many traditional non-LLM techniques that were overlooked because nobody cared about AI features might be able to be included here too; they could be super lightweight and still helpful.
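As a rough sketch of what I mean by small, specific-purpose models running locally (using the Hugging Face pipeline API; the model names here are just examples of small checkpoints, not anything any browser vendor has announced):

```python
# Small, local, single-purpose models for two of the use cases above:
# page summarization and sentiment detection. Models download once,
# then everything runs on-device.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

article = "Long page text pulled from the browser would go here..."

summary = summarizer(article, max_length=60, min_length=15, do_sample=False)
tone = sentiment(article[:512])  # small classifiers have short input limits

print(summary[0]["summary_text"])
print(tone[0]["label"], round(tone[0]["score"], 3))
```

Each of these is a fraction of the size of a general-purpose chat model, which is the efficiency argument.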

If it goes fully remote AI, it loses a lot of privacy cred, and positions itself really similarly to where everyone else is. From a financial perspective, bandwagoning on AI in the browser but "we won't send your data anywhere" seems like a trendy, but potentially helpful and effective way to bring in a demographic interested in it without sacrificing principles.

But there's a lot of speculation in this comment. Mozilla's done a lot for FOSS, and I get they need monetization outside of Google, but hopefully it doesn't lead things astray too hard.

I might be crazy, but I'm wondering if we'll bypass this in the long run and generate 2D frames of 3D scenes. Either having a game be low-poly grayboxed and then each frame generated by an AI doing image-to-image to render it out in different styles, or maybe outright "hallucinating" a game and its mechanics directly to rendered 2D frames.

For example, your game doesn't have a physics engine, but it does have parameters to guide the game engine's "dream" of what happens when the player presses the jump button to produce reproducible actions.

I sit somewhere tangential on this - I think Bret Victor's thoughts are valid here, or my interpretation of them - that we need to start revisiting our tooling. Our IDEs should be doing a lot more heavy lifting to suit our needs and reduce the amount of cognitive load that's better suited for the computer anyways. I get it's not as valid here as other use cases, but there's some room for improvements.

Having it in separate functions is more testable and maintainable and more readable when we're thinking about control flow. Sometimes we want to look at a function and understand the nuts and bolts and sometimes we just want to know the overall flow. Why can't we swap between views and inline the functions in our IDE when we want to see the full flow? In fact, why can't we see the function inline but with the parameter variables replaced by passed values to get a feel for how the function will flow and compute what can be easily computed (assuming no global state)?
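A toy example of the kind of view I'm imagining (the function, values, and IDE behaviour here are all hypothetical):

```python
# What we write: a small, testable function and a call site.
def shipping_cost(weight_kg: float, rate: float, flat_fee: float) -> float:
    return flat_fee + weight_kg * rate

order_total = shipping_cost(2.5, rate=4.0, flat_fee=3.0)

# What an IDE could render on demand for this call site - the body inlined,
# parameters replaced by the passed values, and the pure arithmetic
# pre-computed (possible because there's no global state involved):
#
#   order_total = 3.0 + 2.5 * 4.0    # = 13.0
```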

I could be completely off base, but more and more recently - especially after years of teaching introductory programming - I'm leaning more toward the idea that our IDEs should be doubling down on taking advantage of language features, live computation, and co-operating with our coding style... and not just OOP. I'd love to hear some places that I might be overlooking. Maybe this is all a moot point, but I think code design and tooling should go hand in hand.

My cynical guess is that's what they're hoping the community will do ("like lemmings, I tell you!" - spez, probably) to drive higher traffic numbers before some announcement or meeting.

I certainly used to, and used to think it was essentially gender neutral, but again - in certain contexts like a male dominated classroom, the women/nb students could easily feel excluded by it. Outside of that, I also recognized my trans friends had a lot of thoughtless people intentionally misgendering them on the regular just to be mean, and finding small ways to reduce that reinforcement felt better than not. It was also surprisingly not that tough for me to adopt the more neutral language, so if it's a subtle help with no skin off my back it just seems very win-win.

I think it would be a great system to easily donate to instance hosts if it was supported as an instance opt-in feature.

Soap. 100%.

100%, and this is really my main point. Because it should be hard and tedious, a student who doesn't really want to learn - or doesn't have trust in their education - will bypass those tedious bits with the AI rather than going through the tedious, auxiliary skills they're expected to pick up and using the AI as a personal tutor - not a replacement for those skills.

So often students are concerned about getting a final grade, a final result, and think that was the point, thus, "If ChatGPT can just give me the answer what was the point", but no, there were a bunch of skills along the way that are part of the scaffolding and you've bypassed them through improper use of available tools. For example, in some of our programming classes we intentionally make you use worse tools early to provide a fundamental understanding of the evolution of the language ergonomics or to understand the underlying processes that power the more advanced, but easier to use, concepts. It helps you generalize later, so that you don't just learn how to solve this problem in this programming language, but you learn how to solve the problem in a messy way that translates to many languages before you learn the powerful tools of this language. As a student, you may get upset you're using something tedious or out of date, but as a mentor I know it's a beneficial step in your learning career.

Maybe it would help to teach students about learning early, and how learning works.


I'm really torn on this, because on one hand the over generalization of ADHD prevented me - and is still preventing me - from taking my own diagnosis too seriously, but that same information got me to at least think about it and get a consult with a psychiatrist on it in the first place.

It helped the diagnosis but not the feelings of being an imposter post-diagnosis.


Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying it's high quality in the first place, but these systems are positive feedback loops both in training and in use. If you train on higher quality code, it will write higher quality code, but it will be less able to handle edge cases or to sensibly complete code that isn't at the same quality bar or style as the training code.

On the use side, if you provide higher quality code as input when prompting, it is more likely to predict higher quality code because it's continuing what was written. Using standard approaches, documenting, just generally following good practice with code before sending it to the LLM will majorly improve results.
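A small illustration of the difference (the function is made up; the point is the kind of context you hand the model before asking it to continue):

```python
# Well-scaffolded prompt context: clear name, type hints, docstring with a
# doctest-style example. A completion model continuing from here has a lot
# to anchor on, so it tends to stay at this quality bar and style.
def parse_duration(text: str) -> int:
    """Convert a duration string like '1h30m' or '45s' into total seconds.

    >>> parse_duration("1h30m")
    5400
    """
    # <- ask the model to complete the body from here
    raise NotImplementedError


# Bare prompt context: the model has almost nothing to continue from,
# so completions drift in style and quality.
def pd(t):
    raise NotImplementedError
```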

I know this post and comment might sound shilly but switching to more expensive microfibre underwear actually made a big impact on my life and motivated me to start buying better fitting and better material clothes.

I'd always bought cheap and thought anything else was silly. I was wrong. They're so much more comfortable: I haven't had a single pair even begin to wear down, there's less sweating and I feel cleaner, they fit better, and they haven't been scrunchy or uncomfortable once, compared to the daily issues of that cheap FotL life. This led to more expensive and longer-lasting socks with textures I like better, and better-fitting shoes that survive more than one season.

It was spawned by some severe weight loss and a need to restock my wardrobe. My old underwear stuck around as backups to tell me I needed to do laundry, but going back to the old ones was bad enough that I stopped postponing laundry.

Basically, I really didn't appreciate how much I absolutely hated so many textures I was constantly in contact with until I tried alternative underwear and realized you don't have to just deal with that all the time.

This is a very output-driven perspective. Another comment put it well, but essentially when we set up our curriculum we aren't just trying to get you to produce the one or two assignments that the AI could generate - we want you to go through the motions and internalize secondary skills. We've set up a four year curriculum for you, and the kinds of skills you need to practice evolve over that curriculum.

This is exactly the perspective I'm trying to get at with my comment - if you go to school to get a certification to get a job and don't care at all about the learning, of course it's nonsense to "waste your time" on an assignment that ChatGPT can generate for you. But if you're there to learn and develop a mastery, the additional skills you would have picked up by doing the hard thing - and maybe having a chat AI support you in a productive way - are really where the learning is.

If 5 year olds can generate a university level essay on the implications of thermodynamics on quantum processing using AI, that's fun, but does the 5 year old even know if that's a coherent thesis? Does it imply anything about their understanding of these fields? Are they able to connect this information to other places?

Learning is an intrinsic task that's been turned into a commodity. Get a degree to show you can generate that thing your future boss wants you to generate. Knowing and understanding is secondary. This is the fear with generative AI - further losing sight of the fact that we learn through friction and that the final output isn't everything. Note that this is coming from a professor who wants to mostly do away with grades, but recognizes larger systemic changes need to happen.

Yeah, I really think it's important to not see Lemmy as one singular community, or a lot of important use cases will go ignored.

Only when it's intentionally censored and trained to react in a particular way. When it's not, you remember it was trained on random internet content.

Every time. Try to get ahead of your work? Well, good for you, that first 20% went really well, now let's spend the next two weeks on "work" that interferes with your other needs and needs to get thrown out because there's no way it's integrating with the other 80% that needs to happen within the next hour and also everything that you did for the other 20% is useless and needs to be redone now that you broke it with that tangent.

It's been a painful summer "preparing" to teach my fall courses.

Interesting - I've been thinking about trying to decentralize lately. I've been having fun collecting my data from sites to analyze my own behaviours and build unique recommendation engines for myself, and was recently thinking about trying to build a crawler and DIY search engine for myself. Any tips/pitfalls on getting started with that?


Same! If you know of any online courses suitable for postsecondary students looking to build tech skills I would appreciate it, otherwise I might need to try getting a duty reallocation for a bit to put time into building one.

I appreciate the comment, and it's a point I'll be making this year in my courses. More than ever, students have been struggling to motivate themselves to do the work. The world's on fire and it's hard to intrinsically motivate to do hard things for the sake of learning, I get it. Get a degree to get a job to survive, learning is secondary. But this survival mindset means that the easiest way is the best way, and it's going to crumble long-term.

It's like jumping into an MMORPG and using a bot to play the whole game. Sure, you have a character at the level cap, but you have no idea how to play, how to build a character, and you don't get any of the references anyone else is making.

Hmm... Nothing off the top of my head right now. I checked out the Wikipedia page for Deep Learning and it's not bad, but it has quite a bit of technical info and jumps around the timeline, though it does go all the way back to the 1920's with its history as jumping-off points. Most of what I know came from grad school, having researched creative AI around 2015-2019, and being a bit obsessed with it growing up before and during my undergrad.

If I were to pitch some key notes: the page details lots of the cool networks that dominated from the 60's through the 2000's, but it's worth noting that there were lots of competing models besides neural nets at the time. Then in 2011, two things happened at right about the same time: the ReLU (a simple way to help preserve data through many layers, increasing the complexity networks could handle), which, while established in the 60's, only swept deep learning in 2011; and, majorly, Nvidia's cheap graphics cards with parallel processing and CUDA, which were found to massively boost the efficiency of running networks.
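For anyone curious, the ReLU itself is almost embarrassingly simple - a quick NumPy sketch of the activation (the "preserving data through layers" framing above is my paraphrase of the usual vanishing-gradient story):

```python
# ReLU: max(0, x) applied element-wise. Compared to saturating activations
# like sigmoid/tanh, it doesn't squash large values, which is a big part of
# why signals (and gradients) survive through many stacked layers.
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # -> [0.  0.  0.  1.5 3. ]
```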

I found a few links with some cool perspectives: Nvidia post with some technical details

Solid and simplified timeline with lots of great details

It does exclude a few of the big popular-culture events, like Watson on Jeopardy in 2011. To me it's fascinating because Watson's architecture was an absolute mess by today's standards - over 100 different algorithms working in conjunction, mixing tons of techniques together to get a pretty specifically tuned question-and-answer machine. It took 2880 CPU cores to run, and it could win about 70% of the time at Jeopardy. Compare that to today's GPTs: while ChatGPT requires massive amounts of processing power to run, the models have an otherwise elegant structure, and I can run awfully competent ones on a $400 graphics card. I was actually in a gap year waiting to start my undergrad in AI and robotics during the Watson craze, so seeing it and then seeing the 2012 big bang was wild.

My guess was that they knew gaming was niche and were willing to invest less in this headset and more in spreading the widespread idea that "Spatial Computing" is the next paradigm for work.

I VR a decent amount, and I really do like it a lot for watching TV and YouTube, and am toying with using it a bit for work-from-home where the shift in environment is surprisingly helpful.

It's just limited. Streaming apps aren't very good, there's no great source for 3D movies (which are great, when Bigscreen had them anyway), they're still a bit too hot and heavy for long-term use, the game library isn't very broad and there haven't been many killer-app games/products that distinguish it from other modalities, and it's going to need a critical amount of adoption to get used in remote meetings.

I really do think it's huge for giving a sense of remote presence, and I'd love to research how VR presence affects remote collaboration, but there are so many factors keeping it tough to buy into.

They did try, though, and I think they're on the right track. Facial capture for remote presence and hybrid meetings, extending the monitors to give more privacy and flexibility to laptops, strong AR to reduce the need to take the headset off - but they're first selling the idea, and then maybe there will be a break. I'll admit the industry is moving much slower than I'd anticipated back in 2012 when I was starting VR research.

Yeah, a lot of my systems have been built up by noticing bad patterns and finding easier alternatives. A frozen curry that takes 10 minutes of effort tops, with pre-made masala paste - it may not be the most satisfying, but it's costing me about $4, I'll be eating in less time than ordering in, and I won't get stuck looking at menus for an hour.