#SSP0142-T-042922-I-DON'T-UNDERSTAND-THIS
SENDING A LETTER
APRIL 23 2022
FEEDBACK:
I’m not sure I really understand what you’re doing … but it feels like you’re using your system as a kind of “prompt” to spin some sort of story.
So it’s a “device” of a sort – but I can’t figure out why you make the correlations that you do. That’s why I say it feels more like a kind of “prompt”.
Here’s a simple one: you could have a set of coins, and on each side you could put a word – some verbs – some nouns – some adjectives – and then toss them. There can be no doubt that as you toss the coins, various interesting patterns will come up that can “prompt” you to spin an interesting yarn. But it’s NOT the coin speaking. It would really be YOU speaking.
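To make the coin analogy concrete, here is a minimal sketch of such a coin-toss prompter; the coins and words are invented placeholders, not anything proposed in the letter:

```python
import random

# Hypothetical two-sided "word coins": each coin carries one word per face.
# The words are arbitrary placeholders chosen only for illustration.
coins = [
    ("run", "sleep"),      # a verb coin
    ("river", "lantern"),  # a noun coin
    ("silent", "golden"),  # an adjective coin
]

def toss_prompt() -> str:
    """Toss every coin once and return the face-up words as a prompt."""
    return " ".join(random.choice(faces) for faces in coins)

print(toss_prompt())  # e.g. "sleep river golden" - a seed for a yarn
```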
RESPONSE:
It isn't a randomizer, as far as I can tell at this point. So far it appears to be a set order, or a shape. I call it a "device", but that might not be the best word for it, as nothing moves on it... yet. So there is no randomness aspect to it, except perhaps the question of whether its existence itself was randomly chosen from all the other shapes that exist. It wasn't. Nevertheless, I am hypothesizing that this unchanging shape has a universal impact on our language. So it seems to be a piece of unknown framework in linguistics that, if proven (tsadi-TS) and qualified (kof-Q) via peer (pe-P) review (resh-R), could then be used by all to enhance our study of linguistics.
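The parenthetical tags in that last sentence pair English words with Hebrew letters and Latin sound values. Collected literally, the four pairings read as follows; the dictionary and print-out are only an illustration, and the rule behind the pairing is the hypothesis under discussion, not something this sketch derives:

```python
# The four word/letter/sound pairings named in the response above,
# collected verbatim. This lookup only restates the examples given;
# it does not implement whatever rule produces the pairing.
pairings = {
    "proven":    ("tsadi", "TS"),
    "qualified": ("kof",   "Q"),
    "peer":      ("pe",    "P"),
    "review":    ("resh",  "R"),
}

for word, (letter, sound) in pairings.items():
    print(f"{word} -> {letter} ({sound})")
```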
FEEDBACK:
The most sophisticated mechanical prompting devices fall in the domain of AI and GPT-3. These are used to correct spelling … but are easy to extend so that they can guess the next word you might want to write – given the prior sentences.
The auto-complete feature is quite interesting in the context of email messages going back and forth – say – between people having a conversation. These can be extended to suggest not just one word, but two words – and why not three – and why not a whole sentence!? Indeed, that’s exactly what some writers have discovered – and they use GPT-3 as an elaborate prompting device to generate interesting stories.
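A toy version of that auto-complete idea can be sketched as a bigram counter: tally which word follows which in some text, then suggest the most frequent follower. The corpus and function names below are invented for illustration – real systems such as GPT-3 use large neural networks, not lookup tables – but the predict-the-next-word framing is the same:

```python
from collections import Counter, defaultdict

# Tiny bigram "auto-complete": count which word follows which in a corpus,
# then suggest the most frequent followers of a given word.
corpus = "the coin is tossed and the coin speaks and the story begins".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word: str, n: int = 1) -> list[str]:
    """Return up to n most likely next words after `word`."""
    return [w for w, _ in followers[word].most_common(n)]

print(suggest("the", n=3))  # e.g. ['coin', 'story']
```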
These things are ultimately nothing but elaborate neural networks trained on billions of words of sampled text. Some have created videos where they seem to “speak” and have a conversation with an AI. It’s an interesting illusion, in that the replies are grammatically flawless – and often make perfect sense – at least insofar as they sound like what a person would say.
Even more interesting is prompting two AI robots to chat with each other by just starting them off – even with just a random sentence. Then one responds – and the other responds to that – and back and forth it goes. But it’s ultimately a probabilistic machine underneath it all, with a massive collection of data. It gives the illusion that there is actually someone there – a person – who understands.
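That back-and-forth loop is easy to sketch. The `reply` function below is a stand-in for whatever model each bot wraps – it returns canned lines rather than anything learned – so only the turn-taking structure is real here:

```python
import random

def reply(message: str) -> str:
    """Stand-in for a language model: it just returns a canned response.
    In a real setup this would be a call to a model such as GPT-3."""
    stock = ["Interesting. Tell me more.", "Why do you say that?", "I agree."]
    return random.choice(stock)

# Seed the exchange with a random sentence, then alternate turns.
message = "The sky turned green this morning."
print(f"Seed: {message}")
for turn in range(4):
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    message = reply(message)
    print(f"{speaker}: {message}")
```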
But consider the following bit of dialogue. ME is the actual person testing this out; LaMDA is the AI robot:
This is where it gets interesting. The human is building up a scenario – and seeing what the AI robot can deduce about the NEW scenario – and whether it can predict what is likely to happen under a certain condition:
Again – LaMDA is just an AI robot. But it has put 2 and 2 together and made a human-like prediction about how Alice is likely to react!
RESPONSE:
Like this perhaps?
I suspect that if my tool works, it will push AI forward by leaps and bounds in the area of neural networks. Each node of meaning is one node in a neural network. How many nodes are people using to teach AI meaning? I suspect a lot – no idea, just guessing that current algorithms might use entire dictionaries' worth of words as their nodes. I am suggesting that the total number of necessary nodes is only 23, because I have basically taken (in theory – allegedly) every possible category of meaning and reduced it to 24. Then, from the 24 nodes of meaning, we can create 1! 2! 3! 4! ... 23! meaning combinations, which we call words. Imagine how this would aid AI research if true. But I am only theorizing and pie-in-the-skying, because at the moment I only see possibilities. I am still looking for a NO answer, or a Yes. A peer review.
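One way to read that count – assuming the "1! 2! 3! ... 23!" shorthand means unordered selections of 1 through 23 of the 24 nodes – is the sum of binomial coefficients below. That reading is an assumption on my part; taking the factorials literally, as counts of orderings, would give a much larger number:

```python
from math import comb

# Number of unordered subsets of 24 meaning-nodes, taking 1 up to 23
# nodes at a time. Whether subsets are the right reading of the
# letter's claim is an assumption; permutations would count far more.
total = sum(comb(24, k) for k in range(1, 24))
print(total)  # 16777214, i.e. 2**24 - 2
```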
APRIL 29 2022