How many of you are using ChatGPT to help you with your work, and not telling your boss/co-workers?

dbilitated@aussie.zone to Asklemmy@lemmy.ml – 359 points –

Just out of curiosity. I have no moral stance on it, if a tool works for you I'm definitely not judging anyone for using it. Do whatever you can to get your work done!


High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).

The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.

Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.

I should also add that I fully inform students and administrators that I’m using AI. Whenever I use an assessment that is created with AI I indicate with a little “Created with ChatGPT” tag. As a history teacher I’m a big believer in citing sources :)

How has this been received?

I imagine that pretty soon using ChatGPT is going to be looked down upon like using Wikipedia as a source

I would never accept a student’s use of Wikipedia as a source. However, it’s a great place to go initially to get to grips with a topic quickly. Then you can start to dig into different primary and secondary sources.

ChatGPT is the same. I would never use the content it makes without verifying that content first.

Is it not already? I've found it to be far less reliable than Wikipedia.

This is why I read through everything I use to make sure it’s accurate.

Well, the people that use it know that, but for the average person, ChatGPT still has a high reputation.

Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?

I had professors do different wordings for questions throughout college, I never encountered a professor or TA that wouldn’t clarify if asked, and, generally, the amount of confusing questions evened out across all of the versions, especially over a semester. They usually aren’t doing it to trick students, they just want to make it harder for one student to look at someone else’s test.

There is a risk of it negatively impacting students, but encouraging students to ask for clarification helps a ton.

My professors would randomize the order of the questions instead.

I have had professors that essentially create chiral A & B versions and also randomize the order. Never underestimate the amount of effort a lazy student will go through to cheat.

I use ChatGPT to create banks of questions that are aligned to the essential topics that I need students to learn. Then I randomly assign the same number of questions to each student from each essential topic. I give the students the list of topics to focus their studying on.

I also have other “categories” that form their final grade, things like participation and homework assignments. So any marginal unfairness that might result from randomized test questions is more than made up for over the course of everything I grade them on.
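The random-assignment step described above is easy to automate; a minimal sketch in Python (the topic names and bank contents are made up for illustration, not from any real assessment):

```python
import random

# Hypothetical question banks, keyed by essential topic.
banks = {
    "Causes of WWI": ["q1", "q2", "q3", "q4", "q5"],
    "Treaty of Versailles": ["q6", "q7", "q8", "q9", "q10"],
}

def build_test(banks, per_topic, seed=None):
    """Draw the same number of questions from each topic for one student."""
    rng = random.Random(seed)  # seed per student for a reproducible version
    test = []
    for topic, questions in banks.items():
        test.extend(rng.sample(questions, per_topic))
    rng.shuffle(test)  # also randomize the overall question order
    return test

# One unique but reproducible version per student:
version = build_test(banks, per_topic=2, seed="student-42")
```

Seeding by student ID means each student gets a stable version (handy for rechecking grades later) while still drawing the same number of questions per essential topic.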

Sure it could, but the same issue is present with one question: some students will get the wording or find it easy, others may not. Having a test in groups to limit cheating is very common and never led to any problems, as far as my anecdotal evidence goes.

You’re increasing the odds by changing the wording. I don’t see why it’s necessary; just randomizing the order of the questions would suffice.

I'm a special education teacher and today I was tasked with writing a baseline assessment for the use of an iPad. Was expecting it to take all day. I tried starting with ChatGPT and it spat out a pretty good one. I added to it and edited it to make it more appropriate for our students, and put it in our standard format, and now I'm done, about an hour after I started.

I did lose 10 minutes to walking round the deserted college (most teachers are gone for the holidays) trying to find someone to share my joy with.

I wish I had that much opportunity to write (or fabricate) my own teaching material. I'm in a standardized testing hellscape where almost every month there's yet another standardized test or preparation for one.

It’s one of the fascinating paradoxes of education that the more you teach to standardized tests, the worse test results tend to be. Improved test scores are a byproduct of strong teaching - they shouldn’t be the only focus.

Teaching is every bit as much an art as it is a science and straight-jacketing teachers with canned curricula only results in worse test scores and a deteriorated school experience for students. I don’t understand how there are admins out there that still operate like this. The failures of No Child Left Behind mean we’ve known this for at least a decade.


I don't have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.

Basically this. Knowing the right questions and context to get an output and then translating that into actionable code in a production environment is what I'm being paid to do. Whether copilot or GPT helps reach a conclusion or not doesn't matter. I'm paid for results.

A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted "I can't come in to work today because I feel unwell" but now I can't take this person quite so seriously any more.

Classic over explaining to cover up a lie.

I never send anything other than "I'll be out of the office today" for every PTO notice.

Exactly, and let's be honest: your coworkers don't want to hear about your explosive diarrhea problems or the weird mole on your butt.

Ask yourself why they felt the need to generate an AI sick note instead of being honest 👌

I dunno, I'd consider it a moral failing on the part of the person who couldn't be honest and direct, even if there's a cultural issue in the workplace.

Dunno, everyone else seems to be happy sending a one-liner 👌

Exactly. If they're too lazy to write a fake sick note then they're certainly too lazy to work; either send them in for remediation or terminate them. Either way they shouldn't be in the workplace.

I had a coworker come to me with an "issue" he learned about. It was wrong, and it wasn't really an issue, and then it came out that he got it from ChatGPT and didn't really know what he was talking about, nor could he cite an actual source.

I've also played around with it and it's given me straight up wrong answers. I don't think it's really worth it.

It's just predictive text, it's not really AI.

I concur. ChatGPT is, in fact, not an AI; rather, it operates as a predictive text tool. This is the reason behind the numerous errors it tends to generate, and its lack of self-review prior to generating responses is the clearest indication of it not being an AI. You can identify instances where ChatGPT provides incorrect information, you correct it, and within 5 seconds of asking again, it repeats the same inaccurate information in its response.

It's definitely not artificial general intelligence, but it's for sure AI.

None of the criteria you mentioned are needed for it to be labeled as AI. Definition from Oxford Libraries:

the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

It definitely fits in this category. It is being used in ways that previously, customer support or a domain expert was needed to talk to. Yes, it makes mistakes, but so do humans. And even if talking to a human would still be better, it's still a useful AI tool, even if it's not flawless yet.

It just seems to me that by this definition, the moment we figure out how to do something with a computer, it ceases to be AI because it no longer requires human intelligence to accomplish.

I guess the word "normally" takes care of that. It implies a situation outside of the program in question.

i think learning where it can actually help is a bit of an art - it's just predictive text, but it's very good predictive text - if you know what you need and get good at giving it the right input, it can save a huge amount of time. you're right though, it doesn't offer much if you don't already know what you need.

Can you give me an example? I keep hearing this, but every time somebody presents something, be it work-related or not, it feels like at best it would serve as better lorem ipsum.

I’ve had good success using it to write Python scripts for me. They’re simple enough I would be able to write them myself, but it would take a lot of time searching and reading StackOverflow/library docs/etc since I’m an amateur and not a pro. GPT lets me spend more time actually doing the things I need the scripts for.

I use it with web development by describing what I want something to look like and having it generate a React component based on my description.

Is what it gives me the final product? Sometimes, but it’s such a help to knock out a bunch of boilerplate and get me close to what I want.

Also generating documentation is nice. I wanted to fill out some internal wiki articles to help people new to the industry have something to reference. Spent maybe an hour having a conversation asking all of the questions I normally run into. Cleaned up the GPT text, checked for inaccuracies, and cranked out a ton of resources. That would have taken me days, if not weeks.

At the end of the day, GPT is better with words than I am, but it doesn’t have the years of experience I have.

More often than not you need to be very specific and have some knowledge on the stuff you ask it.

However, you can guide it to give you exactly what you want. I feel like knowing how to interact with GPT is becoming similar to being good at googling stuff.

Isn't that what humans also do and it's what makes us intelligent? We analyze patterns and predict what will come next.

I've played around with it for personal amusement, but the output is straight up garbage for my purposes. I'd never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company's information security officer, because anything you input automatically joins their training database, and you're exposing your company to liability when, not if, OpenAI suffers another data breach.

The very act of sharing company information with it can land you and the company in hot water in certain industries, regardless of whether OpenAI is broken into.

I've been using it a little to automate really stupid simple programming tasks. I've found it's really bad at producing feasible code for anything beyond the grasp of a first-year CS student, but there's an awful lot of dumb code that needs to be written and it's certainly easier than doing it by hand.

As long as you're very precise about what you want, you don't expect too much, and you check its work, it's a pretty useful tool.

I've found it useful for basically finding the example code for a 3rd party library. Basically a version of Stack Exchange that can be better or worse.

I essentially use it as interactive docs. As long as what you're learning existed before 2021 it's great.

Yeah sadly the times I've gotten screwed is when a major version change occurred in 2022. Got burned once doing that and now I know to check to see if we have upgraded past the version the code works before spending too much time working on it.

I don't know you, the language you use, or the way you use ChatGPT, but I'm a bit surprised at what you say. I've been using ChatGPT on a nearly daily basis for months now, and while it's not perfect, if the task isn't super complicated and it's described well, after a couple of back-and-forths I usually have what I need. It works and does what is expected, without being coded in a horrendous way.

And gpt4 is even better

My job involves a lot of shimming code in between systems that are archaic, in-house, or very specific to my industry (usually some combination of the three), so the problems I'm usually solving don't have much representation in gpt's training data. Sometimes I get to do more rapid prototyping/sandbox kind of work, and it's definitely much more effective there where I'm (a) using technologies that might pop up on stack overflow and (b) don't have a set of arcane constraints the length of my arm to contend with.

I'm absolutely certain that it's going to be a core part of my workflow in the future, either when the tech improves or I switch jobs, but for right now the most value I get out of it is as effectively a SO search tool.

Got it. With context, it makes much more sense.

I myself use some of the most widely used programming language ( php and react mostly ) so yhea, there's plenty to be found with those

I, like most people, find it easier to write code than to read it. That "check its work" step actually means more work for me.

not chatGPT - but I tried using copilot for a month or two to speed up my work (backend engineer). Wound up unsubscribing and removing the plugin after not too long, because I found it had the opposite effect.

Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Write the code

It would be

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Start writing the code and wait for the auto complete
  4. Read the auto complete and decide if it does exactly what I want
  5. Do one of the following, depending on 4:
     5a. Use the autocomplete as-is
     5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
     5c. Ignore the autocomplete and write the code yourself

idk about you, but the first set of steps just seems like a whole lot less hassle than the second set of steps, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.

I will say though in the interest of fairness, there were a few instances where I was blown away with copilot's ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django's ORM), so if you're just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets

EDIT: I did use chatGPT for my peer reviews this year though and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve in simple english and it spat out very professional peer reviews in the format expected by the review form

Those different sets of steps basically boil down to a student finding all the ways they can to cheat and spending hours doing it, when they could have just used less time to study for the test.

Not saying that you're cheating, just that it's the same idea. Usually the quickest solution is to just tackle the thing head-on rather than find the lazy workaround.

What I think ChatGPT is great for in programming is ‘I know what I want to do but can’t quite remember the syntax for how to do it’. In those scenarios it’s so much faster than wading through the endless blogspam and SEO guff that search engines deal in now, and it’s got much less of a superiority complex than some of the denizens of SO too.

As a side note, whilst I don't really use AI to help with coding, I was kinda expecting what you describe, more so for having stuff like ChatGPT doing whole modules.

You see, I've worked as a freelancer (contractor) for most of my career now, and in practice that mostly means coming in and fixing/upgrading somebody else's codebase, though I've also done some so-called "greenfield projects" (entirely new work). In my experience, understanding somebody else's code is a lot more cognitively heavy than coming up with your own stuff; in fact, some of my projects would've probably gone faster if we had just rewritten the whole thing (but that wasn't my call to make, and often the business side doesn't want to risk it).

I'm curious if multiple different pieces of code done with AI actually have the same coding style (at multiple levels, so also software design approach) or not.

A lot of people are going to get fucked if they are...

It's using the "startup method" where they gave away a good service for free, but they already cut back on resources when it got popular. So what you read about it being able to do six months ago, it can't do today.

Eventually they'll introduce a paid version that might be able to do what the free one did.

But if you're just blindly trusting it, you might have months of low quality work and haven't noticed.

Like the lawyers recently finding out it would just make up caselaw and reference cases. We're going to see that happen more and more as resources are cut back.

Huh? They already introduced the paid version half a year ago, and that was the one responsible for the buzz all along. The free version was mediocre to begin with and has not gotten better.

When people complain that ChatGPT doesn’t comply to their expectations it’s usually a confusion between these two.

Like the lawyers recently finding out it would just make up caselaw and reference cases. We’re going to see that happen more and more as resources are cut back.

It’s been notorious for doing that from the very beginning though

Anyone blindly trusting it is a grade A moron, and would’ve just found another way to fuck up whatever they were working on if ChatGPT didn’t exist.

ChatGPT is a tool, if someone doesn’t know what they’re doing with it then they are gonna break stuff, not ChatGPT.

This is exactly like people who defend Tesla by saying it's your fault if you believed their claims about what a Tesla can do...

Which isn't a surprise; there's a huge overlap between people gullible enough to believe either company's claims, and some people will bend over backwards to defend those companies because of the sunk cost fallacy.

I don’t know what OpenAI even claims that ChatGPT can do, but if you trust marketing from any company then you’re gonna get burnt.

I’m not defending the company in any way, more just defending that in general LLMs can be useful tools, but people need to make educated decisions and take a bit of responsibility.

That may have been their plan, but Meta fucked them from behind and released LLaMA, which now runs on local machines at up to 30B parameter size, and by the end of the year will run at better-than-GPT-3.5 ability on an iPhone.

Local llms, like airoboros, WizardLm, Stable Vicuña or Stable Coder are real alternatives in many domains.


Some of my co-workers use it, and it's fairly obvious, usually because they are putting out even more inaccurate info than normal.

Urgh one of my coworkers (technically client, but work closely alongside) clearly uses it for every single email he sends, and it's nauseating. He's crass and very poorly spoken in person, yet overnight all his email correspondence is suddenly robotic and unnecessarily flowery. I use it regularly myself, for fast building of Excel formulas and so forth, but please, don't dump every email into it.

Why should anyone care? I don't go around telling people every time I use stack overflow. Gotta keep in mind gpt makes shit up half the time so I of course test and cross reference everything but it's great for narrowing your search space.

I did some programming assignments in a group of two. Every time, my partner sent me his code without further explanation and let me check his solution.

The first time, his code was really good and better than I could have come up with, but there was a small obvious mistake in there. The second time his code to do the same thing was awful and wrong. I asked him whether he used ChatGPT and he admitted it. I did the rest of the assignments alone.

I think it is fine to use ChatGPT if you know what you are doing, but if you don't know what you are doing and try to hide it with ChatGPT, then people will find out. In that case you should discuss with the people you are working with before you waste their time.

I've had partners like that in the past. If ChatGPT didn't exist they would've found another way to cheat or avoid work.

The type of partner who takes the task you asked them to complete, posts the task description on an online forum and hope someone gives them the answer.

Yes but I think it is a bit different because it just lowers the bar for this a lot. You also really lose trust in everything once you realize that you have spent a lot of time interacting with and checking AI generated stuff without knowing.

I get that. Before ChatGPT if I had a bad partner it is very quickly obvious that their work is bad.

Now you might be tricked into thinking they're competent, which I can imagine is more frustrating because it's unpredictable.

I guess that right now people are overusing it as it's so new, but in the end the people who want to graduate without trying to learn will always try to abuse whatever tools they have to cheat. Usually they face the consequences at some point in their lives.

To really be successful you need to be curious enough to want to understand things at a deep level. With LLMs, people who don't really care will learn even less than before.

This is the key with all the machine learning stuff going on right now. The robot will create something, but none of them have a firm understanding of right, wrong, truth, lies, reality, or fiction. You have to be able to evaluate its output because you have no idea if the robot's telling the truth or not at that moment. Images are pretty immune to this because everyone can evaluate a picture for correctness or realism, and even if it's a misleading photorealistic image, well, we've already had Photoshops for a long time. With text, you always have to keep in mind that the robot might be low quality or outright wrong, and if you aren't equipped to evaluate its answers for that, you shouldn't be using it.

Even with images, unless you're looking for it most people will miss glaring problems. It's like that basketball video psychology experiment.

The problem is definitely bigger with LLMs though since you need to be an expert to check the output for validity. I will say when it's right it saves a ton of time, but when it's wrong you need to know enough to tell.

Yes, LLMs are great as a research assistant if you know what to look for, but they're a horrible learning tool. It's even worse if you don't know the correct way to search for an answer, it will set you down a completely wrong path. I don't use any answer without cross referencing and testing it myself. I also rewrite most of the code it spits out too since a lot of it follows terrible programming patterns and outdated standards.

He should've at least looked at the code and tested it before sending it to you. Ugh. Hate doing assignments with people who do the bare minimum and just waste your time.

The problem with using it is that you might be sending company proprietary or sensitive information to a third party that's going to mine that information and potentially expose that information, either directly or by being hacked. For example, this whole thing with Samsung data: https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/

We've been instructed to use ChatGPT generically. Meaning, you ask it generic questions that have generic usage, like setting up a route in Express. Even if there is something more specific to my company, it almost always can be transformed into something more generic, like "I have a SQL DB with users in it, some users may have the 'age' field, I want to find users that have their age above 30" where age is actually something completely different (but still a number).

Just need to work carefully on ChatGPT.
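The "genericized" question above translates straight into code; a minimal sketch using Python's sqlite3 module (the table, column names, and data are placeholders, standing in for whatever sensitive field the real question involves):

```python
import sqlite3

# In-memory stand-in for the real database; 'age' is a placeholder
# for whatever proprietary numeric field the actual question is about.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", 34), ("bob", 28), ("carol", None), ("dave", 41)],
)

# "Some users may have the 'age' field, find users with age above 30":
# the IS NOT NULL check makes the missing-field case explicit.
rows = conn.execute(
    "SELECT name FROM users WHERE age IS NOT NULL AND age > 30"
).fetchall()
print([name for (name,) in rows])  # → ['alice', 'dave']
```

The point is that the query shape (nullable field, threshold filter) is what you ask ChatGPT about; none of the real schema ever leaves the building.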

I've done so on rare occasion, but every time it made stuff up. Wanted terraform examples for specific things... and it completely invented resource types that don't exist.

Based on the code I've seen from our devs, it must be getting worse. It's never produced acceptable quality imo, but the examples I've seen lately are laughably bad.

It's definitely gotten weirder lately. I think they keep training it on data and they're not looking into that data enough. That's the thing about AI though: it has to come up with an answer. You gave it a prompt, so it will come up with that answer even if it's not a good one.

It's not a search engine or encyclopedia. If you want facts, you have to use the tools for facts.

I use it to write performance reviews because in reality HR has already decided the results before the evaluations.

I'm not wasting my valuable time writing text that is then ignored. If you want a promotion, get a new job.

To be clear: I don't support this but it's the reality I live in.

This is exactly what I use it for. I have to write a lot of justifications for stuff like taking training, buying equipment, going on business travel, etc. - text that will never be seriously read by anyone and is just a check-the-box exercise. The quality and content of the writing is unimportant as long as it contains a few buzz-phrases.

Just chiming in as another person who does this, it's absolutely perfect. I just copy and paste the company bs competencies, add in a few bs thoughts of my own, and tell it to churn out a full review reinforcing how they comply with the listed competencies.

It's perfect, just the kinda bs HR is looking for, I get compliments all the time for them rofl.

Can you please elaborate on your experience of HR people deciding the results before the evaluation? Just curious

Sure!

It happens behind closed doors and never in writing to keep up the farce, but usually I'm given a paltry number of slots of people I can label as high performers. This is really a damn shame because most of my team members are great employees. This is used as a carrot to show that we do give raises and promotions after all, but the proportion is so small it's effectively zero. I'm very clear to my team that trying to becoming a top performer to get a promotion is a bad investment. I do my best to communicate the futility without actually saying it literally in such a way that it could get me into trouble.

Next, they use a spreadsheet to figure who they can probably underpay based on a heuristic likelihood that person would actually leave vs current market rates. These automatically become the low performers ahem satisfactory. You're penalized for being here longer or specializing in something with a small market. Everyone else falls somewhere between satisfactory and above average which makes little difference.

The performance reviews are merely weak documentation to show that somehow HR was "justified" by selectively highlighting strengths or weaknesses depending on the a priori decision of what your performance level was to be.

It's a huge tautology with only one meaningful conclusion: you will be underpaid, and it gets worse over time.

Thanks. This is great insight and tracks with some personal experience or experience of friends.

You could make a whole post about this topic, but I was curious what’s your advice to an employee that wants to do good work, but who doesn’t want to be taken advantage of?

The truth is you have to do good work for yourself because you care about the quality of your work. You work for you.

You separate the factors. You do good work for you because you care because a life doing things you don't care about is less meaningful.

Separately you look at pay. You leave when it's no longer worth staying, which for most people is about every two to three years at least for your early career.


Only used it a couple of times for work when researching some broad topics like data governance concepts.

It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being hallucinated all over.

My bosses use it to write their emails (ESL).

ESL is actually a great use, although there's a risk someone might not catch a hallucination/weird tone issue. Still it would be really helpful there.

Best used in tandem with something like languagetool.org for the final revision.

yeah my biggest use case is quick summaries of things. it's great getting a few bullet points, and i miss details a lot less.


When I'm pissed off I use it to make my emails sound friendly.

My supervisor uses ChatGPT to write emails to higher-ups and it's kinda embarrassing lol. One email he's not even capitalizing or spell-checking, and the next he sends these emails that over-explain simple things and are half irrelevant.

I've used it a couple times when I can't fully put into words that I'm trying to say, but I use it more for inspiration than anything. I've also used it once or twice in my personal life for translating.

Yes, although there's been a huge spike in cancer diagnosis I've been giving out since doing so. Whoops!

I'm a DM using ChatGPT to help me build things for my DnD campaign/world and not telling my players. Does that count? I still do most of the heavy lifting but it's nice to be able to brainstorm and get ideas bounced back. I don't exactly have friends to do that with.

I do the same thing; it's been great. ChatGPT is often problematic in other scenarios because it will sometimes just make stuff up, but that's nothing but a positive for brainstorming D&D plots. I did tell my players though.

It's phenomenal for making statblocks for NPCs too. fleshes out the whole thing in seconds. spells, feats, abilities, everything.

I use Midjourney to create illustrations of what I'm trying to describe as well as NPCs and PCs.

Midjourney isn't free though, correct? I was thinking about doing this, but I'm also just a bit behind the curve with image-generation AI and not sure how best to go about it.

There's a free tier if you have patience.

Not at all. Had a few experiments, then we had a talk about it at work, decided fuck we're not giving these people our source code, and left it at that.

I mean in the end, all ChatGPT could reliably do was scavenge man-pages for me. Which is neat, but also a rather benign trick tbh.

And for summarizing man-pages I would just use tl-dr instead of going to chatgpt

I find it helpful to translate medical abbreviations to English. Our doctors tend to go overboard with abbreviations, there are lots I know but there are always a few that leave me scratching my head. ChatGPT seems really good at guessing what they mean! There are other tools I can use, but ChatGPT is faster and more convenient - I can give it context and that makes it more accurate.

I'm a devops engineer, use it daily. Not to write e-mails, but to frequently ask for the best approach to solve an issue, or for bash/sql/anything queries. My boss and colleagues know about it and use it too though.

A friend of mine just used it to write a script for an Amazing Race application video. It was quite good.

How the heck did it access enough source material to be able to imitate something that specific and do it well? Are we humans that predictable?

I use it for help with formal language sometimes, but I do not trust it and would never try to pass off a whole generated text as mine. I always review it and try to make it sound my own.

There was some issue that came up relating to network shares on a Windows domain that didn't make sense to me and a colleague. I asked GPT to describe why we were seeing whatever behavior and it defined the scope of the feature in a way that completely demystified it for my coworker. I'm a Mac and Linux guy, so while I could loosely grasp it, it was gone from my mind shortly after. Windows domains and file sharing have always been bizarre to me.

Anyway, we didn't hide it. He gave it credit when explaining the answer to the rest of the team in a meeting. This was around the end of last year. The company since had layoffs and I'm looking for a new job, but I did have it reformat my resume and it did a great job. I've never been great at page-layout stuff, as I'm a plain text warrior.

You can have ChatGPT edit a pdf input? I thought it only took plaintext. This sounds super helpful.

More like it took blocks of text and formatted them as bullet points and cleaned up muddled presentation. Sorry for not being clearer.

Yesterday I was working on a training PowerPoint and it occurred to me that I should probably simplify the language. Had GPT convert it to 3rd-grade language, and it worked pretty well. Not perfect, but it helped.

I'm also writing an app as a hobby and, although GPT goes batshit crazy from time to time, overall it has done most of the coding grunt-work pretty well.

Last job, I ran simple tech support instructions through a reading analyser and also had instructions written for children. I even had screenshots with offensively large red arrows pointing at every step. I literally wrote those instructions so you don't have to read.

It still was too complex for some idiots.

I've made PDF guides, videos they can reference, and even done personal training, and it baffles me how some people just can't pay attention. If it was something complicated, I would understand, but it's just simple stuff like resetting your password, opening apps, finding documents, etc.

It's always the older people too, for some reason. I know this is supposed to be a Gen Z/Alpha thing, but these older people have no attention span and the memory of a fish. I know they didn't grow up with this stuff, but c'mon, it's literally a step-by-step guide.

My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things. Until the answers started to become incorrect and not useful.

I have very few writing tasks that don't require careful consideration, so it's not super useful in my day to day. But it can be helpful to get the ball rolling on an outline or first draft so I'm not staring at a blank sheet of paper.

IMO this is one of its biggest strengths. Just having something started is a huge time saver

I'm interested in finding ways to use it, but when I'm writing code I really like the spectrum of different answers on Stack Overflow with comments on WHY they did it that way. Might use it for boring emails though.

I think my best use case is creating regex: just dump in a bunch of examples, test whether the result is wrong, and tell it what's wrong.
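That workflow (generate a pattern, then verify it against known examples before trusting it) is easy to check locally; here's a minimal sketch where the candidate pattern and the example strings are all made up for illustration:

```shell
# Hypothetical regex from the model: match ISO dates like 2023-07-14
pattern='^[0-9]{4}-[0-9]{2}-[0-9]{2}$'

# One example that should match, one that should not
echo "2023-07-14" | grep -Eq "$pattern" && echo "match ok"      # → match ok
echo "14/07/2023" | grep -Eq "$pattern" || echo "reject ok"     # → reject ok
```

If a case fails, paste the failing example back into the chat and say what should have happened.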

re-builder in Emacs works really well for this, because I'll usually already have the text I want to match, replace, adjust, or select in a buffer - I'm constantly using it.

I've found ChatGPT is good for small tasks that require me to code in languages I don't use often and don't know well. My prime examples are writing CMakeLists.txt files, and generating regex patterns.

Also, if I want to write a quick little bash script to do something, it's much better at remembering syntax and string handling tricks than me.

I know many people slightly younger than me are using ChatGPT to breeze through university assignments. Apparently there's one website that uses GPT that even draws diagrams for you, so you don't have to make 500 UML and class diagrams that take forever to create.

If only they would also understand what they’re delivering.

I use GPT-4 daily. I worked with it to create a quick and convenient app on my smartwatch, which allows it to provide wisdom and guidance fast whenever I need it. For more granular things, I use its BingChat interface, which can search the web and see images. The AI has helped me with understanding how to complete tasks, providing counseling for me, finding bugs in my code, writing functions, teaching me how to use software like Excel and Outlook, and giving me random information about various curiosities that pop into mind.

I don't keep it a secret and tell anyone who asks. Plus it's kinda obvious that something is going on with me. I always wear bone conducting headsets that allow the AI to whisper in my ear without shutting me out to the world, and sometimes talk to my watch

The responses to knowing what I'm doing have almost always been extreme: very positive or very negative. The machine is controversial, and when some can no longer stay in comfortable denial of its efficacy they turn to speaking out against its use

Edit: just fixed its translation method. Now the watch will hear non-english speech and automatically translate it for me too (uses Whisper API)

... did you use GPT-4 to write parts of this comment?

Lol no, that's just how I write. It's pretty wack sometimes; often a mix of slang and proper English. Prob because I read lots of nonfic books and am immersed in online culture

Trying to do our work with ChatGPT and then discussing the results has been our #1 topic of kitchen conversation all year.

I use ChatGPT fairly frequently. For example, I often have to write a business email. I'm usually pretty good at it, but sometimes I don't have the time or desire to find the right wording. This is where ChatGPT comes into play: I trained it on my writing style with several examples, and then I simply have my quickly written emails beautified.

My boss doesn't know about it, but I don't hide it either. My company is very, very slow on the technical side and will only understand the benefits of AI in a few years.

Another person chiming in with the same use case. It's saved me SO MUCH time and it really helps get over the anxiety-related procrastination.

This is pretty much what I use it for too. Plus it was my ghost writer for sections of my website content and helped me update my bio for my proposal templates. I would never give it client information. But it sure is handy getting me over writer's block. I usually have it reword its answer 3 times and then I edit them together using the bits and pieces that work best.

My job actively encourages using AI to be more efficient and rewards curiosity/creative approaches. I'm in IT management.

Coworker of mine admitted to using this for writing treatment plans. Super unethical and unrepentant about it. Why? Treatment plans are individual, and contain PII. I used it for research a few times and it returned sources that are considered bunk at best and hated within the community for their history. So I just went back to my journal aggregation.

Super unethical and unrepentant about it.

Super illegal in most jurisdictions too.

We openly use it and abuse it from top to bottom of the company, and for me, add Co-Pilot to that as well.

I use it as a search engine for the LLVM docs.

Works so much better than doxygen.

But it's no secret.

I am the boss and I've had to cajole a couple of my employees into using it.

Any employer that thinks using ChatGPT carefully and judiciously is a bad thing is mistaken. When it works it's a great productivity boost, but you have to know when to kick it to the curb when it starts hallucinating.

As a backend developer I use it to explain some SQL, dev processes that I should know but unsure on, or best practices for X.

SQL is my weakest link.

SQL is my weakest link.

Thought I was the only one.

English is not my first language. I use it to fix grammar and rephrase sentences to make communication easier.

The platform/language that I use isn't supported by ChatGPT or Bard, so I write my own code.

Not using ChatGPT at all because its queue is always full.

See, this confuses the hell out of me. I've NEVER been prevented from using ChatGPT by a queue. Everyone says the queue is a downside of not paying for it, but it seems like I just always pick times when no one is using it.

I didn't even know there was a queue! Strange.

I use it

My boss likes it too. Of course we don't trust it, but it can do certain things easier and faster than a human can.

As a coder, we have had discussions about using it at work. Everyone's fine with it for generation of test data, or for generating initial code skeletons but it still requires us to review every line. It saves a bit of time but that's all.

I'm using the shit out of GPT-4 for coding and it works. And no, I never told anyone, 'cause nobody asks.

I've run emails through it to check tone since I'm hilariously bad at reading tone through text, but I'm pretty limited in how I can make use of that. There's info I deal with that is sensitive and/or proprietary that I can't paste into just any text box without potential severe repercussions.

Aside from asking it coding questions (which are generally a helpful pointer in the right direction), I also ask it a lot of questions like "Turn these values into an array" or something similar when I have to make an array of values (or anything else that's repetitive) and am too lazy to do it myself. Just a slight speedup in work.

I use it at work but gladly tell the boss... It's only pluses if we can do more trivial work faster. More time to relax. They don't watch what I do during the day. The boss relaxes also. All good.

I use it to speed up writing scripts on occasion, while attempting to abstract out any possibly confidential data.

I'm still fairly sure it's not allowed, however. But considering it would be easy to trace API calls and I haven't been approached yet, I'm assuming no one really cares.

I have used it to do simple shell scripts - like, "read a text file, parse out a semver, increment the minor version, set the last value to zero, write back out to the text file". It's pretty good at simple stuff that can be stated that plainly. Mind you, it was a bit wrong and I had to fix it, but it saved me googling commands and writing the script myself. I wouldn't have bothered normally, but I do that once every two weeks, so it's nice to just have a command for it.
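For reference, the bump described above really is only a few lines of bash; this is a rough sketch, and the file name and version string are invented for the example (it also assumes a plain `major.minor.patch` version with no pre-release suffix):

```shell
#!/usr/bin/env bash
# Sketch: read a semver from a file, bump the minor version, zero the patch.
echo "1.4.7" > version.txt                 # sample input file (made up)

# Split on '.' into three fields
IFS=. read -r major minor patch < version.txt
minor=$((minor + 1))                       # increment the minor version
patch=0                                    # set the last value to zero
echo "${major}.${minor}.${patch}" > version.txt

cat version.txt                            # → 1.5.0
```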

I don't see any reason not to use it to (keyword) help with your work. I think it would be wise not to use its responses verbatim, and to fact-check anything it gives you. Additionally, turn off chat history and do not enter any details about yourself, or your employer, into the prompts. Keep things generic whenever you can.

I've used it a couple times to draft reports. Most of what it writes is pretty garbage, but it's good for generating general filler sentences and structure and stuff that I don't want to waste time thinking about.

I've also used it to generate Facebook posts. It's awesome at this; however, recently I've had to make a point of telling it not to include emojis, or the posts get overloaded.

My boss pays for it! I don't use it that much, but it's pretty useful from time to time instead of going through a bunch of unrelated Google results.

I tried it once or twice and it worked well. It's too stupid now to be worth the attempt. The amount of time spent fixing its mistakes has resulted in net zero time savings.

Definitely. ChatGPT for coding help and learning new coding topics. And Gamma for presentations, if only for the nice formatting of content and stock imagery.

I don't have much use for confident-sounding nonsense.

I absolutely kept it from my boss. Then she told me in a 1:1 how extensively she uses it. I was like, hey, I can help! Definitely haven't told my VP though. Also, they then blocked it, so I have to either use it on my iPad or stick to Bard and BingAI on the laptop.

As a manager, it does a great job of writing a bunch of ideas around a subject I need to explain that is not proprietary info. Turned writing a proposal that would have taken me hours to layout and format into just a few seconds with mere minutes tweaking to get just right.

Proudly told my coworker about my experiment with an LLM to help with documentation; we're pretty close to what we would need. I don't yet have the pay grade to run my own experiments on work time, but I'm close enough to start experimenting and tell my boss: you see, this is why I deserve that pay grade.

I suffer from the curse of the blank page, so getting something on the page to edit and expand is a lifesaver for me. It is also useful to adjust tone, and do simple things like document functions. Easy to correct if wrong.

We use it liberally but we are encouraged to do so.

I run a board game store, so just for a chuckle I asked it about what's popular this year or what to order and kept getting the same answer about only having accurate data from 2021 and prior.

Yeah I use it, but only as a rubber duckie. I never put in code unless I understand what it's doing, and most of the time I'm just using it as a sounding board. Since it never returns the right code on the first try anyways haha

It's great at directing and narrowing your search, and when it knows, it does a great job. The problem is that when it doesn't know, it just makes shit up. I was using it earlier today to debug some error messages and it came up with some nonexistent CLI parameters. You still need to know what you're doing and test everything first.

I use it and encourage my staff and other departments to use it.

I feel that we're at a horse-vs-tractor or human-computer-vs-digital-computer moment. In the next 10+ years, those who are AI-ignorant will be underemployed or unemployed. Get into it now and learn to use it as a force multiplier, just like tractors and digital computers were.

The arguments against AI eerily mirror the arguments against tractors and digital computers.

I tell everyone! I suggest my coworkers and bosses do the same.

Why should I keep it a secret?

I’m a family doctor, so I haven’t yet. It’s not a validated tool to source medical information, and I can’t paste any patient identifiers into it, so even if I wanted its input it’s way faster to just use my standard medical resources.

Our EMR plans to do some testing later this year for generative AI in areas that don’t have to be medically validated like notes to patients. I will likely sign up to pilot it if that option is offered.

I use it for D&D, though, along with a mixture of other tools, random generators, and my own homebrew. My players are aware of this.

I've used it on a few occasions, mostly to find better terms and adjust the tone for my emails. Also finding what an acronym stands for and understanding technical issues. Asking it to explain like I'm a 5-year-old or a beginner has saved me some time on long Google searches.

I used AI in general a few years ago as a companion tool for writing SEO-optimized articles. It was OK at that time and would do maybe 30% of the work I needed, but I would still have to go back in and make major edits, or it would only pop out a sentence at a time, so I was constantly prompting it.

My wife is a full-time writer for a company and she uses it all the time to create emails and speeches. She says the leaps and bounds in actual usability are pretty insane. Like, one prompt can give her an entire speech.

Personally I prefer quality over quantity so I don’t use it

Used in small doses to generate text with some degree of precision is helpful. I do find it to be a good way to cut out boring email writing. But I would recommend it more as a text generation tool than a fact generation tool. With the right expectations and work flow it fits right in. And no I don't consider it plagiarism if the client's demand is boring.