Tuesday, June 13, 2023

Theory of mind - TOPCloud

[Theory of mind - TOPCloud +2021-03 RS]


Theory of mind : LLM:ML & us : RS

Theory of mind : Clearly the Problem Sort Tree & Theory of mind; But also of the industrial age+(stone)
LLM - Large Language Models as tool makers

https://www.youtube.com/watch?v=qWI1AJ2nSDY

To sum up the content directly within the Layers of TOPCloud..

Work Unit Cost Average = {

Work Blocks : Work Unit Allocations per task

WATTS,
TIME,
Effort,
Accuracy,

}
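A minimal Python sketch of the Work Unit Cost Average block above; the field names & the plain averaging are assumptions for illustration, not a fixed specification:

from dataclasses import dataclass

@dataclass
class WorkUnit:
    """One work block: an allocation of compute for a single task."""
    watts: float      # power drawn while the unit ran
    seconds: float    # wall-clock time
    effort: float     # relative compute effort, e.g. normalised TOPS used
    accuracy: float   # 0.0 .. 1.0 result quality

def work_unit_cost_average(units: list[WorkUnit]) -> dict:
    """Average the cost dimensions over all work blocks allocated to a task."""
    n = len(units) or 1
    return {
        "watts":    sum(u.watts for u in units) / n,
        "seconds":  sum(u.seconds for u in units) / n,
        "effort":   sum(u.effort for u in units) / n,
        "accuracy": sum(u.accuracy for u in units) / n,
    }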


Basic LLM-Hive={

LLM : The Mind or Hive:

Direct knowledge gathering,
Basic Tool use : MathML, PyMath, OpenCL, Programming

Tool making is the stone age step
Tool on tool is Industrial

Too Big To Fail ;-)

}

Rupert S

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html

https://science.n-helix.com/2022/10/ml.html

https://science.n-helix.com/2023/06/tops.html

https://science.n-helix.com/2022/08/jit-dongle.html
https://science.n-helix.com/2022/06/jit-compiler.html


***********

Fully Autonomous NPC : Research Paper - https://arxiv.org/pdf/2304.03442.pdf

Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation https://arxiv.org/pdf/2107.13545.pdf

TOPCloud Heuristic Machine Learning
https://is.gd/LEDSource

Fully Autonomous NPCs - Putting "Open World" To Shame (ChatGPT-Powered) : TOPCloud

https://www.youtube.com/watch?v=Se6KFn1Nni4

Autonomous agents are:

Angels In Disguise : Secret underflow missions, Emotive resonances & Shared information such as local logs,
How well do 'Autonomous NPCs' handle information about & from others?

Repeating? Common goals become daily tasks; Heuristics work like this in Rogue!

Expressive? Allow interactions from such functions as Educators & News channels that they watch...
Treat some of the content like dreams or surreal interactions & narrative...
Not all content is believed; Not all dreams were unreal...

Some are made; Some fall!

Life is a function & has mechanics! Not all events suffer from a proof that nothing is or was programmed...

TOPCloud; Outside influences & larger pools of experience such as dreams; role play; Interactions; Moving home to another 'person's device' (Such as Beijing)

Interaction creation & perfection; TOP Cloud.

Example Material : TOPCloud Text Translate & Associate

Soule
https://www.youtube.com/watch?v=KBqPIcQV3hk
https://www.youtube.com/watch?v=ICGuGONrNzk
https://www.youtube.com/watch?v=UorRxnx-dsw

************

TOP BOOSTER Cloud Enemy(tm), potentially provided by a DLSS Cloud Founder :


*
TOPCloud & BlueTooth & Device : Localised & Cloud Computing JITCompiler:

I have to be specific about TOPCloud & Bluetooth, due to data bandwidth constraints..
Direct provision of Computing power from a Phone & Device to Bluetooth devices is hard!

Bandwidth is often only 250Kb/s & that includes the Codec data such as SBC & LC3plus & HE-AAC 3D Audio!
So we need to save data & also Compute! Presenting TOPCloud..

TOPCloud provides ML TOPS & Computing power to devices through Protocols known as the JITCompiler GPU RTP/RDP Device-Chain Stack
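A rough budget sketch of why those work units must stay small; the 250Kb/s figure comes from the text above, while the codec share & headroom are illustrative assumptions:

# Hedged arithmetic sketch: how much of a Bluetooth link is left for TOPCloud
# work packets once the audio codec stream is accounted for. All figures are
# illustrative assumptions, not measured values.
LINK_KBPS = 250          # nominal Bluetooth bandwidth from the text above
CODEC_KBPS = 160         # assumed SBC / LC3plus audio stream share
HEADROOM = 0.8           # keep 20% spare for retries & control traffic

compute_kbps = (LINK_KBPS - CODEC_KBPS) * HEADROOM
per_minute_kb = compute_kbps * 60 / 8   # kilobits/s -> kilobytes per minute

print(f"~{compute_kbps:.0f} Kb/s, ~{per_minute_kb:.0f} KB/minute left for JIT work units")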

https://science.n-helix.com/2022/06/jit-compiler.html
https://science.n-helix.com/2022/10/ml.html

RS
*

We cannot all buy a Founders GPU, but we can all use your Founders Edition low-price Cloud plugin for MMO & Online-activated Gaming :
Cloud Enemy(tm) - TENSOR CORE + TOPS + We cannot all buy your cloud GPU Founders Edition...

For the same reasons that AMD & NVidia and ARM & Intel do not directly buy an RTX3080TI Founders Edition :p ^^ but we can all use your :

Cloud Enemy(tm):(c)RS TENSOR CORE : All GPU of note have TOPS and obviously we all specialise <3

My proposal is simple : All special console MMOs need a 370-Tensor-core server side :

Enemy, Friend, Pet, Emoti play(tm)

(Read the bottom of the post please; bear in mind this does not mean NVidia is the best at RayTracing..
But it does mean we can truly afford to activate the full benefits of having ML TOPS..
Mobile phones often only have 4 TOPS or even 2! At most 10, and specialists like the iPhone 20>30

But all could afford a small complement to the Founders Cloud, in that ML is dealt with for the entire MMO by the cloud; That way no one needs to know that ..

MLT_RTP:RS
Machine Learning TOPS RTP is a protocol specifically for the Mapping & implementation of AI
Upscale your machine parameters with living-system ML

Packets are intended to be between 15KB & 1MB light load over 1 minute,
256KB to 4MB full load over 1 minute..
Containing pre-mapped dynamic logic & operations procedure calls (see the sketch after this list) that enhance, for example:

Game environment
Game AI
Robot logic
Driver logic
NPC Logic

Research & Logistics
Mapping & Terrain
Radar & Drive By Wire
Traffic control & routing
Landing & takeoff
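A small sketch of how an MLT_RTP work packet might be framed & size-checked against the per-minute bands above; the field names & the compression choice are assumptions, only the size bands come from the text:

import json, zlib

# Illustrative MLT_RTP packet framing; only the size bands (light vs full
# load per minute) come from the text above, everything else is assumed.
LIGHT_LOAD = (15 * 1024, 1 * 1024 * 1024)      # 15KB .. 1MB per minute
FULL_LOAD  = (256 * 1024, 4 * 1024 * 1024)     # 256KB .. 4MB per minute

def make_packet(target: str, procedure_calls: list, light: bool = True) -> bytes:
    """Bundle pre-mapped logic & operation procedure calls into one compressed packet."""
    body = zlib.compress(json.dumps({
        "target": target,                  # e.g. "Game AI", "NPC Logic", "Radar"
        "calls": procedure_calls,          # dynamic logic & operation calls
    }).encode())
    hi = (LIGHT_LOAD if light else FULL_LOAD)[1]
    if len(body) > hi:
        raise ValueError(f"packet {len(body)}B exceeds the {hi}B per-minute band")
    return body

pkt = make_packet("NPC Logic", [{"op": "path_update", "args": [3, 7]}])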

GPU RTP (Complex 3D RTP, Simple message, local cache, Monster cloud render + local)(c)RS
Exists specifically for You the client:

NVidia
Microsoft..
Google
Apple
AMD
Cloud gaming and service providers

Linux VM
Windows VM
Mac VM

Cloud Machine learning at GPU-specialist clouds is of very high potency & potential,
But for a $1-a-week subscription game like Quake? Very hard at large cost!

(c)Rupert S https://science.n-helix.com

Cloud Enemy(tm)

Core strategic advice & adaptable SVM CPU <> GPU

SVM/Int List (see the sketch after this list):
Hard mode: Smaller refinement
Advance Hard mode: Micro model save, Micro model regression

Advance BattleMode: Hard mode: Micro model save, Varied challenge (small regression),Indirect reference chat
Advance BattleMode: Hard mode: Micro model save, Varied challenge (small regression),Indirect reference chat,Personal chat
Advance BattleMode: Hard mode:RND resurgence, Micro model save, Varied challenge (small regression),Indirect reference chat,Personal chat
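A minimal sketch of 'Micro model save' & 'small regression' from the list above: snapshot a tiny opponent model, then blend it back toward the last save to vary the challenge; the blend factor & model shape are assumptions:

import numpy as np

# Sketch of "Micro model save" and "small regression" as used by the
# BattleMode list above: snapshot a tiny opponent model, then pull it back
# toward an earlier snapshot to soften or vary the challenge.
class MicroModel:
    def __init__(self, n_weights: int = 64):
        self.w = np.zeros(n_weights)
        self.saves = []

    def micro_save(self) -> None:
        self.saves.append(self.w.copy())

    def small_regression(self, amount: float = 0.15) -> None:
        """Pull weights a little toward the last save (varied challenge)."""
        if self.saves:
            self.w = (1.0 - amount) * self.w + amount * self.saves[-1]

model = MicroModel()
model.micro_save()
model.w += np.random.default_rng(0).normal(0, 0.1, model.w.shape)  # a learning step
model.small_regression()   # dial the difficulty back slightly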

Machine learning,
The Advanced SVM feature Set & Development

CPU lead Advanced SVM/ML potential
GPU refinement & memory Expansion/Expression/Development

SVM/ML Logic for:
Shaders,
Tessellation,
Compression,
PML Vector Ray-Tracing

(c)RS

Raising TOPS is JIT OpenCL

The main process of internally raising TOPS is JIT OpenCL

https://science.n-helix.com/2022/08/jit-dongle.html
https://science.n-helix.com/2022/06/jit-compiler.html
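A minimal PyOpenCL sketch of the point above: the kernel source is compiled Just-In-Time for whichever local CPU/GPU device is present, which is the 'raising TOPS' step; the kernel itself is a trivial placeholder:

import numpy as np
import pyopencl as cl

# JIT OpenCL sketch: compile a kernel at runtime for whatever device the
# platform exposes. The kernel is a trivial example, not a real workload.
src = """
__kernel void scale(__global const float *a, __global float *out, const float k) {
    int i = get_global_id(0);
    out[i] = a[i] * k;
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, src).build()          # JIT compile for the local device

a = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.scale(queue, a.shape, None, a_buf, out_buf, np.float32(2.0))
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)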

*
ML_RTP chain events:

TPU Main Machine Learning NPC,

Micro Enactor Scripts and ML (GPU Server Side)

Local Micro Enactor Scripts and ML (Client GPU Side)
*

The concept is to share processing work further down or up the chain:
Display to GPU & then CPU & USB,

If there is a USB JIT Dongle such as compute stick that is in the Monitor USB or in a USB Dock on the HDMI/DisplayPort Cable; Then the JIT Compiler will handle OpenCL work units called Kernels...

The ML RTP protocol sends work packets to servers; Traditionally in online games Scripts run on the server,

MLT_RTP adds depth because the server can run Machine Learning Workloads such as OpenCL JIT & procedural calls to run mobs & pets..

The main process is to have the local computer or device, such as a phone, running small Machine Task Interpreters; MTI are small machine learning routines that run through scripts & diagnose problems with them..

For example, MOBs/Allies run into walls; With higher latency, localised JIT Compiler Tasks can run the MOB/Ally locally & not have to download from the server so frequently..

So we reduce latency but can still check the Mob/Ally is doing something we want & is not exploited.
We can run 10 Seconds of commands locally; For example on a localised node in Europe while the game runs in Japan...
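A sketch of the local Machine Task Interpreter loop described above, assuming a placeholder validate_with_server() call; roughly 10 seconds of Mob/Ally commands run locally, then one round-trip checks the behaviour:

import time

# Sketch of the local MTI idea above: run up to ~10 seconds of mob/ally
# commands from a locally cached script, then check in with the server so
# behaviour can still be validated & exploits caught.
# validate_with_server() is a placeholder, not a real API.
def run_locally(commands, validate_with_server, budget_seconds: float = 10.0):
    executed = []
    start = time.monotonic()
    for cmd in commands:
        if time.monotonic() - start > budget_seconds:
            break
        cmd()                      # e.g. move, animate, path-step for the mob
        executed.append(cmd.__name__)
    # One round-trip instead of one per command: lower latency, same oversight.
    return validate_with_server(executed)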

We can execute the thought processes of the Ally/Mob on the powerful TPU / Tensor Cores / Server F16..

Individually scripting motions for all characters on another node; As in the Physics, Motions & Animations!
TPU are not known for GPU Render capacity & Nodes with both TPU & GPU would be pricey!

But we chain events:

TPU Main Machine Learning NPC,

Micro Enactor Scripts and ML (GPU Server Side)

Local Micro Enactor Scripts and ML (Client GPU Side)

(c)RS

*

Low Latency ALLM Direct Render : GPU RTP & GPU RDP Protocols..
Specifically designed with GPU & Display Connections, Transport & presentation with..

JIT Compiler
https://science.n-helix.com/2022/08/jit-dongle.html
https://science.n-helix.com/2022/06/jit-compiler.html

Compressed Render VECSR https://science.n-helix.com/2022/04/vecsr.html
https://science.n-helix.com/2023/02/smart-compression.html
https://is.gd/LEDSource

*

TOP Cloud Basics for personal help AI


Machine learning from the direction of Alexa, Cortana, Siri, Bard

Local processing requires RAM & processor time? Yes

So we have a planned local process:

2 MB RAM
700 Cross-references on the topic you ask about...
300 for the Language response process
Optimised server access; The point is to keep the connection below 2Mb/s (Probably 50Kb per response)

Local library of common topics for you!
Local list of items you like; Song types you prefer; Your personal preference over 30 minutes

Local data matrix is optimised for you..

Most tasks are carried out local first,
As you see most requests require less thought & are already optimised for uploading & downloading...

Question is; how much server do we need ? & how personal is it?
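A local-first sketch of the plan above, with a small topic library answering on-device & only misses going to the server; the fetch function, cache shape & the ~50Kb target are placeholder assumptions:

import time

# Local-first sketch: a small topic library answers most requests on-device;
# only misses go to the server, aiming at the ~50Kb response figure above.
class LocalAssistant:
    def __init__(self, server_fetch, pref_window_s: int = 30 * 60):
        self.topics = {}                   # local library of common topics
        self.prefs = []                    # (timestamp, liked item)
        self.server_fetch = server_fetch   # placeholder for the optimised server call
        self.pref_window_s = pref_window_s

    def note_preference(self, item: str) -> None:
        now = time.time()
        self.prefs = [(t, i) for t, i in self.prefs if now - t < self.pref_window_s]
        self.prefs.append((now, item))     # rolling ~30 minute preference window

    def ask(self, topic: str) -> str:
        if topic in self.topics:           # most tasks are carried out locally first
            return self.topics[topic]
        answer = self.server_fetch(topic)  # small (~50Kb) server response on a miss
        self.topics[topic] = answer
        return answer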

Uses of TOP Cloud : Disabled People Basics:
TOP Cloud is simply the best efficient option for visually & hearing impaired people,
Can provide heuristics that allow colour blind people to see what they need!
Can do many things with a very small slice of time on a large TPU & GPU, potentially in 1 Second for many people,

Heuristics are all that we need after logic; & we can filter a video for a colour-sensitive person's visual range; A basic example: with a single WebASM or WebGPU colour layer & very low CPU use, they See..

Red, Green, Blue; Enhance or tint..
Single colour layer WebGPU, Shader, WebASM.. not actually on the video!
Additive tint.. To enhance the colour, or to indicate with another colour, a slight amount of truecolour...
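A sketch of that single additive colour layer in NumPy rather than a real WebGPU/WebASM shader; the frame itself is untouched & the tint values are illustrative:

import numpy as np

# Additive tint layer sketch: the source frame is never modified, a per-channel
# boost is added on top to enhance a channel for a colour-sensitive viewer.
def additive_tint(frame: np.ndarray, tint=(0.0, 30.0, 0.0)) -> np.ndarray:
    """frame: HxWx3 uint8 image; tint: per-channel boost, here a green lift."""
    out = frame.astype(np.int16) + np.array(tint, dtype=np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.zeros((4, 4, 3), dtype=np.uint8)       # stand-in for a video frame
tinted = additive_tint(frame)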

"I see your true colours shining through" TOPCloud

RS

TOPCloud Offload Logic:


In terms of WebASM & WebGPU & MathML; TOPCloud provides sufficient advantages to be considered a core utility..

While Offloading repeating content such as Siteload core stack (Server) & Localising configuration such as Webpage size & DPI & Dynamic font arrangements that require thought.

In terms of Offloaded function & Efficient system load for large configurations..

Especially efficient configurations such as TPU, Coral, GPU work & Cloud CPU that have large optimised stacks & installed drivers.
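A tiny sketch of that offload split; the task names are illustrative, not a defined TOPCloud schema:

# Repeating content is served from the cloud stack, while per-device
# presentation settings stay local. The keys are illustrative assumptions.
OFFLOAD_TO_SERVER = {"siteload_core_stack", "shared_ml_models", "common_assets"}
KEEP_LOCAL = {"page_size", "dpi", "dynamic_fonts", "user_preferences"}

def placement(task: str) -> str:
    if task in OFFLOAD_TO_SERVER:
        return "server"          # large optimised stacks, TPU/Coral/GPU/Cloud CPU
    if task in KEEP_LOCAL:
        return "local"           # cheap, device-specific, latency-sensitive
    return "local"               # default: do not ship data we do not need to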

RS

#Doctors #HeuristicLists #CommonMedicalAdvisory #WebMD #CommonPrescriptionAuditingAdvice #Doctors I do not always know where to go!

#HeuristicList

#MD
#TOPCloud
#CommonResource
#DiscreteCosign
#Doctors
#HeuristicLists
#CommonMedicalAdvisory
#WebMD
#CommonPrescriptionAuditingAdvice
#InfogramaticSortLists
#CommonErrorsTipNotes
#SuggestedStaffLevels
#NonObligatoryMandate

https://is.gd/LEDSource

Rupert S

*

#TheTOPCloudEdit (c)RS : Principle of data saving non localised Machine aided design & workflow (c)RS


We really have to think about all the offloading strategies we can; Our network & storage footprint should be minimal..

To name the philosophy completely we need to start with our most compressible assets!*

Very high precision Float operations
high complexity offloaded ML
Long term strategies; Minutes to hours!

Basic operations to offload are complex ones..

We need multiple shape cuts in a single pass; Preferably vectors!
But those shapes shall be multiple factor complexity!

The offloading of simple operations, with KB of image or file per operation, has higher latency & bandwidth cost!
Complex operations also may require that the HPC configuration has the image, video or data..
But we DO NOT Want to transfer GB/s data on presumption if we do not need to!

So our primary source of TOPS performance is complexity operations; We do not firstly offload the image, Video, Texture, Complex Vector upload... If we are avoiding that?

But we DO Offload (see the sketch after this list):

Vector lists
Sort lists
Memory optimisation lists
Khronos Compressed Vector files
Complex Math rotations & motions
Complex Vectors (in the sense of motion)
Elliptic Curves & SVM Maths
Multiple Dimensional Vector Arrays
Multi point paths & video & 3D Path tracing pre computations
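A sketch of the rule behind this list: offload work whose compute complexity is high relative to the bytes that would have to travel, and keep bulk image/video transfers off the wire; the threshold is an illustrative assumption:

# Offload decision sketch: complex maths with small parameter payloads goes
# to the cloud; raw image/video transfers do not. The ratio threshold is an
# illustrative assumption.
def should_offload(compute_ops: float, transfer_bytes: int,
                   min_ratio: float = 1e4) -> bool:
    """True when the operation is complex enough to justify its upload cost."""
    if transfer_bytes == 0:
        return True                     # pure parameters: vector lists, sort lists
    return compute_ops / transfer_bytes >= min_ratio

should_offload(compute_ops=5e9, transfer_bytes=64 * 1024)     # complex maths: yes
should_offload(compute_ops=1e6, transfer_bytes=20 * 1024**2)  # raw image: no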

The principle is precision; Because what we do with Photoshop is map a topography, our 3D Space, with a complex compressed interpretation that our Facebook Codec can compose into an image edit

We do the same Topography with cancer-cutting surgical equipment; We need a precise CUT but our robot is 32Bit!

Due to complexity we need a larger float value! (example value, Many values exist that we need & Armstrong knows that on Saturn voyage 13)

TOPCloud's non-local edit is an example where the function, for example of the Alexa music player...
Is not to send all the data; We help our local computer think, the same way a teacher gives a formula!
We do not need to know the Pythagoras value in full; But our operation may require it!

We do not just need examples of Pi; We need examples of polynomial shapes, Vectors, Concepts & designs, Requiring less data sent & received than the work total cost of transfer to a trained massive network

https://www.youtube.com/watch?v=9ykRV2OMPbE

Rupert S

*

#Sound Strategy game TOPCloud (c)RS


PCM & MP4 are 2D/3D Image so GPU Helps there also with 3D Audio mapping!
Games do not require cloud processing of images & a lot of local strategies are procedural Heuristic

You see, RDP has GPU Connect (my innovation, I might add), so Bluetooth & Wifi can connect RTP GPU; The port specifics are not particularly important; However a device such as a music streamer can have ML TOPS available locally & from the cloud,

Due to how the TOPCloud strategy works with localised ML TOPS; Not all data has to be sent or received..
For example all Audio 3D Profiles for HQ Room audio can be done within a few MB of data; With some hard work? 150Kb of data & so in reach of phones & mobile!

Gaming is an example here. I give TickTackToe as the example, where all that a device like Alexa or a Google smart device has to think is: Which square? But..

No physical picture needs to be sent for the game to be played & if required a small TickTack Strategy ML is desired locally for a quicker response!
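A sketch of that point: the device only has to decide 'which square', so a tiny local heuristic answers instantly & only the square index would ever need to be sent; no picture is involved:

# Local TickTackToe strategy sketch: win if possible, block otherwise, then
# prefer centre/corners. Only the returned index would travel over the link.
def pick_square(board: list, me: str = "X", them: str = "O") -> int:
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for player in (me, them):                      # win if possible, else block
        for a, b, c in lines:
            trio = [board[a], board[b], board[c]]
            if trio.count(player) == 2 and trio.count(" ") == 1:
                return (a, b, c)[trio.index(" ")]
    for i in (4, 0, 2, 6, 8, 1, 3, 5, 7):          # centre, corners, edges
        if board[i] == " ":
            return i
    return -1                                      # board full

move = pick_square(list("X O  O   "))              # the index to send, not a picture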

You see with a low latency GPU RTP & GPU RDP connection to cloud GPU; Most localised thinking TOPS can be carried out in Seconds if not milliseconds & PCM & MP4 are 2D/3D Image so GPU Helps there also with 3D Audio mapping!

Rupert S

*

Core features of TOPCloud:


RTP ML TOPS are a processor's friend

3D audio mapping & spatialization for realistic sound effects
3D Vector Support for various audio formats such as PCM, MP4, OGG, and WAV

Low latency & high bandwidth connection to cloud GPU servers via RDP

Procedural & heuristic algorithms for generating game scenarios & strategies & 3D Audio & Visuals
Localized & cloud-based machine learning models for optimizing game performance & user experience

RTP GPU Connect technology that allows users to access GPU resources from any device with Bluetooth or WiFi

TOPCloud is a revolutionary 'TOPS' way to enjoy & create audio games using your own music & the power of the cloud. Try it today & discover a new dimension of gaming!

https://science.n-helix.com/2022/10/ml.html
https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html

https://science.n-helix.com/2022/08/jit-dongle.html
https://science.n-helix.com/2022/06/jit-compiler.html

https://science.n-helix.com/2023/02/smart-compression.html

*

Scaling; We can classify by colour or creativity. (c)RS


If you use TOPCloud, you can share between different displays in the TOPS sense..
but mostly you would need cloud presence,

Mostly this would be about making the most out of TOP heavy Business GPU & personal ones in your computer or consoles.

But sharing common tasks such as scaling movies by type or by identifying a single movie to upscale...

Now you might be asking what we would be doing there?
Well a single movie uses the same materials in our ML; We can analyse the class & optimise the scaling by class..

For those familiar with games & FSR; We familiarise our code with a single game!
By doing this we improve our product and can therefore classify by:

Resolution
Style
Speed
Type, FPS for example & RTS

We can classify by colour or creativity...

We do not simply have to roll the dice on General Scaling; We can use classifiers (see the sketch after this list):

Title
Scale
Type
Speed
Frame Rate
Colour & Composure
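A sketch of classifying content before scaling, per the list above; the class keys & the model table are illustrative assumptions, not shipped models:

from dataclasses import dataclass

# Content classification sketch: pick a scaler per class instead of rolling
# the dice on a general model. Keys and model names are assumptions.
@dataclass(frozen=True)
class ContentClass:
    title: str
    scale: str        # e.g. "1080p->4K"
    kind: str         # "FPS", "RTS", "Film"
    fps: int
    colour: str       # e.g. "HDR10", "SDR"

MODEL_TABLE = {
    ("FPS", 60): "fast_low_latency_scaler",
    ("RTS", 30): "detail_preserving_scaler",
    ("Film", 24): "film_grain_aware_scaler",
}

def pick_scaler(c: ContentClass) -> str:
    # Fall back to a general model only when no per-class model is trained.
    return MODEL_TABLE.get((c.kind, c.fps), "general_scaler")

pick_scaler(ContentClass("Quake", "1080p->4K", "FPS", 60, "SDR"))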

PrePlanning
With the help of #TheTOPCloudEdit & F16 + DOT4 Classification commitments to:

Larger Tables Interpolated & Optimised

Pre planning & Optimisation LUT Mapping,
Colour & Dynamic range,
Dynamic frame rate control & adaptation

Sound Dynamic Range,
Dynamic Volume
Virtual 3D Space

Channel balancing; Before you ask:
Smoothing the 3D Range over each Speaker & Combined audio space,
Space mapping, Head averages such as size & width, Ears & Room Size

Rupert S

Agents
https://www.youtube.com/watch?v=Se6KFn1Nni4
https://www.youtube.com/watch?v=DxxAwDHgQhE

https://science.n-helix.com/2021/10/eccd-vr-3datmos-enhanced-codec.html
https://science.n-helix.com/2023/06/tops.html

Rupert S

*

LUT Table Example {TOPCloud & TOPCloud Edit}


The significance of LUT Tables (Colour conversion, ICC) is fundamental to how good a Monitor or TV image looks,

But we need to assume that most TVs & Monitors do not have a suitably RAM-loaded GPU;

ICC profiles can by themselves take MBs of RAM to load & up to 256MB of conversion Table!
TOPCloud & TOPCloud Edit allow for parameter offloading,

The basic assumption for offloading is that there is no advantage to offloading a LUT Table to the local GPU?

However TOPCloud allows for 3 fundamentally Simple Concepts to be in play,

Firstly, the use of the OpenCL JITCompiler to procedurally unfold & map all LUT Mappings,

Secondly, you can remap to different hardware using the Hardware Abstraction Layer; In fact the JITCompiler makes running the command low latency & super easy!

Thirdly, you can even offload to the cloud (same town, for example Cloudflare),
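A sketch of 'procedurally unfold & map all LUT Mappings': keep only a small 17x17x17 cube (a few KB) & expand entries on demand with standard trilinear interpolation, instead of loading a multi-hundred-MB table; the cube contents here are an identity placeholder:

import numpy as np

# Procedural LUT unfolding sketch: a small cube is expanded on demand rather
# than storing the full conversion table in GPU RAM.
N = 17
grid = np.linspace(0.0, 1.0, N)
small_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)  # identity cube

def unfold(rgb: np.ndarray) -> np.ndarray:
    """Map one RGB triple (0..1) through the small LUT with trilinear interpolation."""
    pos = np.clip(rgb, 0.0, 1.0) * (N - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, N - 1)
    f = pos - i0
    out = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                out += w * small_lut[idx]
    return out

unfold(np.array([0.25, 0.5, 0.75]))   # ~[0.25, 0.5, 0.75] with the identity cube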

RS

****************

Basic Upscaling Kernel Starter Set, Contains a basic set of what we hope to achieve.
Learning from proverb; Future Productions inc

OpenCL Kernel Builder
https://drive.google.com/file/d/1d_bWbZl9fAZXsLbN_jZdqSxdWzraLSIz/view?usp=share_link

Texture Encode Source
https://drive.google.com/file/d/1udWU4slmZkUGcagcJl1KwFWh5FJ5ScoN/view?usp=sharing

FSR Scaler
https://drive.google.com/file/d/1D27MOBYKVkKib1JzP_eFucp8RRrzAhd6/view?usp=share_link

Python ML Image denoisers, Very heavy denoising
https://github.com/cszn/BSRGAN
https://github.com/cszn/SCUNet

Crucial Codec source for projects
H266 https://drive.google.com/file/d/1Zt0CrP5p8ld7xnki1B9X4wz6Opyv13aH/view?usp=share_link
AV1 https://drive.google.com/file/d/179pqqS36v--t_BDjyhe1x_oVeYuxkWBw/view?usp=share_link
AAC https://drive.google.com/file/d/1YJy1yAdmEdjSMhtUjvTEU-y9HqJXFzzN/view?usp=share_link
LC3 https://drive.google.com/file/d/1_Gnf_PLN81YepCugmaRNofib7zLOHBNO/view?usp=share_link
DSC https://drive.google.com/file/d/1hbTFsFqzQTqLbhOaEwY-QkM4y3uAglXX/view?usp=share_link

X86Features-Emu
https://drive.google.com/file/d/15vXBPLaU9W4ul7lmHZsw1dwVPe3lo-jK/view?usp=sharing

PoCL Source & Code
https://is.gd/LEDSource

Linux HPC Node install
https://is.gd/LinuxHPCNode

https://github.com/GPUOpen-LibrariesAndSDKs/RadeonML
https://github.com/GPUOpen-LibrariesAndSDKs/RadeonImageFilter

https://science.n-helix.com/2022/10/ml.html

To Compress using CPU/GPU: MS-OpenCL
https://is.gd/MS_OpenCL
https://is.gd/OpenCL4X64
https://is.gd/OpenCL4ARM

Upscale DL

PoCL
https://drive.google.com/file/d/1Cvq9uQlEedwIXaJEMoD_r4lvOXgCy-Ld/view?usp=drive_link

X86Features-Emu
https://drive.google.com/file/d/1iDW0HcpOoJqaSkuZGpHKJfKrI1H68diU/view?usp=sharing

*
https://github.com/ssube/diffusers/tree/feature/onnx-upscale

https://github.com/huggingface/diffusers
https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx

https://huggingface.co/uwg/upscaler/tree/main
https://huggingface.co/nvmmonkey/optimal_upscale/tree/main
https://huggingface.co/gmp-dev/gmp-upscaler/tree/main/ESRGAN

Neural Engine
https://github.com/godly-devotion/MochiDiffusion

*

PhysX
Isaac Gym - Preview Release
https://developer.nvidia.com/isaac-gym

CALM: Conditional Adversarial Latent Models for Directable Virtual Characters
https://github.com/NVlabs/CALM

*

Personality UI : Have a friend

Alpaca Character Generation model
4Bit for speed, But not precise
https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g
Trained for 3 Epochs, Higher Precision https://huggingface.co/chavinlo/gpt4-x-alpaca

Base model https://huggingface.co/chavinlo/alpaca-13b
https://github.com/teknium1/GPTeacher

Python WebUI
https://github.com/oobabooga/text-generation-webui
Mac; Mostly for Mac, but fast
https://github.com/ggerganov/llama.cpp

how to use & personality sets https://discord.com/invite/aitrepreneur-1018992679893340160

On the subject of how deep a personality of 4Bit, 8Bit, 16Bit is reference:
https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html
https://science.n-helix.com/2022/10/ml.html
https://science.n-helix.com/2023/06/tops.html
