4 Comments
The biggest man

You’re not wrong at all

I feel the creators of this technology have really distorted perceptions through how they talk about it. It's called generative, not refining, AI.

It also doesn’t help that this tech is so underbaked at the moment.

What I want is a Grammarly-style or autocorrect system built into Word that shows sentence and paragraph suggestions in context, allowing me to choose by sentence or by word.

Right now it feels like, if I wanted to use AI to simplify my writing at the best quality, I'd need to copy chunks, too small to have appropriate context, into ChatGPT-4 (paid), get back a text with parts I do and don't like, and then poorly stitch the original and the simplification together. This will change, but right now specialised tools are based on older or smaller models, and the best AI models with a large context window are too general and unspecialised to be used in this way.

Even if I find a way around this, because it's not hard to do, there is still a level of uncertainty right now. We're not at a point of acceptance yet, so if I use it and it's detected, will I get in trouble? Probably not, but we're not at the point where the answer is just no.

And more personally, I still feel most AI isn't moral, stealing the work of artists without credit, payment or permission. While we exist in an era where we can choose not to use these tools, I will choose not to. But much like how sharing all our data and personal information with tech companies stopped being optional because they're too big and integrated into society, there will be a time when I'll have no choice.

Student 01

I couldn't agree more with this post. The utter lack of teaching on how to explore these new tools is sad. It leaves only those curious enough to experiment with LLMs with maybe a better chance in the future, because it's a TOOL, not a replacement for a human. When I hear people dismiss AI use or experimentation with it, I think of the people who refused to use the internet, computers and typewriters. Just look at the ones who embraced those things first...

The way people work is evolving, and if you don't adapt, you'll be left behind. As someone in your cohort, I hope you noticed that in 1 of the 39 essays there weren't many "your"/"you're" or "theres"/"their's" mix-ups. Maybe because of some LLM use, who knows?

Gyorgy Laszlo

Re: when you write that "The utter lack of teaching how to explore these new tools is sad", I'd say: no, it isn't. Not really. A year ago no one knew about LLMs; now we do. We teach ourselves. Don't ever wait for an institution to give you permission to learn.

Student 01

What I meant by this was: for the most part, it feels like we get no acknowledgement of the potential benefits of these new technologies (in education) because "the use of LLMs threaten the certification function of Higher Education Institutions", as you have said.

I very much feel that neglecting LLMs and other new technologies because they may threaten what you do is a big problem. I've said why, but mostly I feel that those who are not curious about new technologies, or who listen to higher-ups telling them to stay away, will eventually cause a big divide between those who know and those who don't.

However, I agree with The biggest man when they said, "most AI isn't moral, stealing the work of artists without credit, payment or permission". This is a big moral problem: when using any LLM, you are working with models trained on uncredited work.
