Wednesday, July 9, 2025

AI-Herd: The thought of Non-Locality & Data Security in the fast world of research

By Rupert S


Non-Locality & Minions: the offsite AI model & how it applies to us : RS 2025

"Still handled by the local LLM, If you want credit!"

https://www.amd.com/en/developer/resources/technical-articles/2025/minions--on-device-and-cloud-language-model-collaboration-on-ryz.html

What is Minions?

"Minions is an agentic framework developed by the Hazy Research Group at Stanford University, which enables the collaboration between frontier models running in the datacenter and smaller models running locally on an AI PC. Now you might ask: if a remote frontier model is still involved, how does this reduce the cost? The answer is in how the Minions framework is architected. Minions is designed to minimize the number of input and output tokens processed by the frontier model. Instead of handling the entire task, the frontier model breaks down the requested task into a set of smaller subtasks, which are then executed by the local model. The frontier model doesn’t even see the full context of the user’s problem, which can easily be thousands, or even millions of tokens, especially when considering file-based inputs, common in a number of today’s applications such as coding and data analysis.

This interactive protocol, where the frontier model delegates work to the local model, is referred to as the “Minion” protocol in the Minions framework. The Minion protocol can reduce costs significantly but struggles to retain accuracy in tasks that require long context-lengths or complex reasoning on the local model. The “Minions” protocol is an updated protocol with more sophisticated communication between remote (frontier) and local agents through decomposing the task into smaller tasks across chunks of inputs. This enhancement reduces the context length required by the local model, resulting in accuracy much closer to that of the frontier model.

Figure 1 illustrates the tradeoff between accuracy and cost. Without Minions, developers are typically limited to two distinct options: local models that are cost-efficient but less accurate (bottom-left) and remote frontier models that offer high accuracy at a higher cost (top-right). Minions allows users to traverse the pareto frontier of accuracy and cost by allowing a remote and local model to collaborate with one another. In other words, Minions enables smarter tradeoffs between performance and cost, avoiding the extremes of all-local or all-remote models.

Please refer to the paper, “Cost-efficient Collaboration Between On-device and Cloud Language Models” for more information on the Minions framework and results."

*

Non-Locality & Minions: the offsite AI model & how it applies to us : RS 2025

For future reference, Minions can be referred to in 2 easy ways:

Cattle Herd, Or Herd, is where a cow or an elephant asks the herd to help it fulfil a task, In most herd situations where a clever being such as an elephant can ask the herd for help,.. They do!

What an elephant does is ask the herd to help it gather food when it finds some.. It shares!

You know that web searching large numbers of pages by yourself is a futile effort for personal task management,.. When the pages ask if you are human!

'No I am not human... I am a researcher! Or a news reporter! lol' #BearlyHumanCyborg #AnimatedCamera #InfoWarrior

So the main point is that Frontier-type non-local devices can hoard data, Large personal hoards of data are unlikely in most cases, and localized research by your machine, if it does page scanning, can invoke hostility...

Large medical datasets, Large chemical lists, Order history for business, Costs & accounting...

All large dataset lists are procedurally called so that the majority of the work is done on the cloud,

Local service can power the requests you desire to make..

The researcher sits in his library & researches any topic freely at 6th form & higher education, & if they are trying for a good grade they quickly find themselves ordering a book,..

So there are many herd tactics,..

Ranging from wolves & ants working together, To cows & farmers,..

Still handled by the local LLM, If you want credit!

Herd tactics appear basic & usually involve localised sharing,.. The most common one in computing for universities & business,.. Is a cluster of computers,..

Cloud dynamics is a complex variable setting, You start with a single client,..

You begin with a local cluster of computers & data (library & local ethernet / WiFi),

You have non-expert advice,.. Social Media for the humans to involve themselves in,..

Still handled by the local LLM, You have offsite references,.. cloud libraries & data,..

You can process the downloaded dataset, yourself,.. If you want credit for your work,..

You can share the credit with your co-workers,.. By asking them to help,.. Usually the local mainframe / Network is happy to say who is doing the research,..

Finally,.. You can have the work done by offsite resources,..

Professional, Legal, Medical, Science, Advice,..

If you want credit for thinking,.. Try yourself first!

Minions for 'Real MEN'

Rupert S

*


What Is Non-Locality in AI?

Non-locality refers to offloading computation to cloud-hosted AI services.

Remote frontier models deliver advanced reasoning and large-context handling at the cost of higher latency, data transfer, and per-token fees.

Local on-device models offer privacy and low inference cost but struggle with very long contexts or deep reasoning.

Without a hybrid approach, developers must choose either low-cost/low-accuracy local inference or high-cost/high-accuracy cloud inference.

The Minions Framework

Minion Protocol

The frontier model ingests the full request.

It breaks the job into smaller subtasks.

It sends those subtasks (with minimal context) to the local model.

Enhanced Minions Protocol

Inputs are chunked into manageable pieces.

Remote and local agents exchange richer messages about each chunk.

Accuracy approaches that of the frontier model with far fewer remote tokens.

Together, these steps let developers traverse the Pareto frontier of cost versus accuracy, avoiding the extremes of all-local or all-remote solutions.
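To make the delegation loop concrete, here is a minimal sketch in C++; remoteDecompose, localExecute & remoteSynthesize are hypothetical stubs standing in for the real framework's (Python) protocol messages, not its actual API:

#include <string>
#include <vector>

struct Subtask { std::string instruction; std::string chunk; };

// Hypothetical stubs; the real Minions framework defines its own messages.
std::vector<Subtask> remoteDecompose(const std::string& request) {
    return { {"summarise", ""} };            // frontier plans, sees no local data
}
std::string localExecute(const Subtask& t) {
    return "local result";                   // on-device LLM inference
}
std::string remoteSynthesize(const std::vector<std::string>& outs) {
    return outs.empty() ? "" : outs.front(); // frontier writes the final answer
}

std::string minionsRun(const std::string& request,
                       const std::vector<std::string>& chunks) {
    std::vector<std::string> results;
    for (const Subtask& t : remoteDecompose(request))   // small subtasks
        for (const std::string& c : chunks)             // chunked local context
            results.push_back(localExecute({t.instruction, c}));
    return remoteSynthesize(results);  // only distilled results leave the device
}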

Herd Tactics: Metaphors for Collaboration

Minions draws on classic examples of cooperative task-sharing in nature and agriculture:

Elephant and herd: An elephant (frontier model) spots resources and delegates gathering to its herd (local models) without revealing the entire map.

Wolves and ants: Wolves (cloud) scout and plan routes; ants (device) undertake localized gathering in parallel.

Cows and farmers: Farmers (remote) plan the harvest; cows (local) graze as directed and report back in small updates.

Ants localise farming, nutrient gathering, health, defence & other complex activities..

These metaphors illustrate delegation, chunked work, and minimal context exposure.

Workflow & Attribution

Local first: “Still handled by the local LLM, if you want credit!” encourages you to solve subtasks on your device before invoking the frontier model.

Cluster & Cloud Dynamics

Build a local compute cluster (library, LAN/WiFi).

Connect to offsite data repositories (cloud libraries).

Delegate only the most complex or large-scale tasks to the frontier model.

Attribution: When the local LLM completes subtasks, you retain full “thinking credit.” Only edge-case reasoning is handled remotely.

Embracing Hybrid AI

By adopting Minions, you achieve significant cost reductions without sacrificing accuracy. Privacy improves as full data contexts need not leave your device.

The resulting pipeline scales from coding and data analysis to domain-specific research, letting your AI “herd” work in concert across local and non-local realms.

Further Exploration

Experiment with chunk sizes and communication frequency to find your ideal cost/accuracy balance.

Combine Minions with retrieval-augmented generation for even larger knowledge bases.

Explore analogies from swarm intelligence (e.g., bees, starlings) to inspire novel delegation strategies.

Investigate on-device fine-tuning to boost local model capabilities before delegation.

RS

*

Non-Locality & Minions: The Offsite AI Model and How It Applies to Us (RS 2025)

Understanding how to blend remote “frontier” models with on-device inference is key to balancing cost, performance, and privacy.

The Minions framework offers a concrete blueprint.

What Is Non-Locality in AI?

Non-locality refers to leveraging AI services hosted offsite—typically in cloud datacenters—to perform heavy inference tasks.

Remote models (like GPT-4 or Claude) excel at complex reasoning and large-context understanding but incur high per-token costs and data-transfer latency.

Local models run on AI PCs (with NPUs/accelerators) reduce costs and keep data private but may struggle with very long contexts or intricate reasoning.

Without a bridge, developers must choose either low-cost/low-accuracy local inference or high-cost/high-accuracy cloud inference.

The Minions Framework

Minions is an agentic collaboration system co-developed by Stanford’s Hazy Research Group and AMD that orchestrates work between a remote “frontier” model and a local LLM.

The Minion protocol:

The frontier model receives the full user request.

It decomposes the task into smaller subtasks.

It sends only these subtasks (and minimal context) to the local model for execution.

The enhanced Minions protocol further:

Chunks huge inputs into manageable segments.

Uses richer exchanges between agents.

Yields accuracy near frontier levels while slashing remote-model token usage.

Together, these steps let you traverse the Pareto frontier of cost versus accuracy—no longer an either/or decision.

Herding Agents: Metaphors for Collaboration

Drawing from classic “herd” tactics and nature’s teamwork, Minions mimics cooperative strategies:

Elephant & herd: An elephant (large model) that spots distant food delegates gathering to its herd (local LLM) without sharing its full map, maximizing efficiency and privacy.

Wolves & Ants: Wolves (frontier) scout and plan routes; ants (local) execute localized gathering in parallel.

Cows & Farmers: Farmers (remote) plan harvests; cows (device) graze where directed, feeding back yields in small reports.

These examples highlight delegation, chunked work, and minimal context sharing.

Applying Minions to Real-World Workloads

Large Document Analysis

Local LLM scans gigabytes of logs or code.

Frontier model issues targeted queries or summaries.

Medical & Scientific Datasets

Sensitive records stay on-device.

Only distilled sub-inquiries go to the cloud for complex interpretation.

Business & Accounting

Local cluster manages daily transaction parsing.

Frontier model validates anomalies or generates strategic insights.

Research & Education

Student’s PC handles literature scanning.

Frontier model refines hypotheses or checks citations—saving bandwidth and preserving drafts.

Workflow & Credit

Local First: “Still handled by the local LLM, if you want credit!” encourages you to attempt solutions on your device before outsourcing, emulating a researcher’s rigor.

Cluster & Cloud Dynamics

Spin up a local cluster (library, LAN/WiFi).

Integrate offsite data repositories (cloud libraries).

Delegate only complex reasoning or very large-scale tasks to remote agents.

Attribution: When the local model solves subtasks, you retain full “thinking credit.” Only edge cases invoke the frontier.

Minions for “Real MEN”

By adopting Minions, you gain:

Significant cost reductions without sacrificing accuracy.

Enhanced data privacy by minimizing context exposure.

A flexible, scalable pipeline suited for coding, analysis, and domain-specific research.

Embrace the herd, delegate with precision, and let your AI flock thrive across local and non-local realms.

RS

*

Minions? Overview from our view

Minions is an agentic framework co-developed by Stanford’s Hazy Research Group and AMD that..

Enables,.. Seamless collaboration between large, cloud-hosted “frontier” models and smaller, on-device language models,..

By splitting work into targeted subtasks, it minimizes the data and tokens sent offsite while preserving near-frontier accuracy.

Key Principles

Frontier model acts as the manager, ingesting the full user request and planning the overall approach.

Local model acts as the executor, processing distilled subtasks entirely on the user’s device.

Only minimal context and subtask definitions travel to the frontier, shrinking per-token costs and data exposure.

Iterative exchanges ensure that complex or large inputs are chunked into bite-sized pieces for on-device handling.

Protocol Variants

Minion Protocol

Frontier breaks down a task and sends subtasks to the local model along with just enough context.

Enhanced Minions Protocol

Inputs are pre-chunked.

Frontier and local agents trade richer metadata about each piece.

Accuracy climbs toward frontier-only levels with a fraction of the token spend.

How It Works

User submits a large or complex request.

Frontier model analyzes and decomposes it into subtasks.

Local model receives each subtask plus minimal context and runs inference on-device.

Results flow back to the frontier for any final synthesis or complex reasoning.

Frontier returns the polished answer to the user.

Benefits

Significant reduction in cloud-compute costs.

Enhanced privacy since full data never leaves the device.

Scalability across contexts—from gigabyte-scale logs to multi-document legal briefs.

Flexibility: you traverse the cost vs. accuracy Pareto frontier rather than choosing one extreme.

Ideal Use Cases

Document Analysis: On-device scanning of large codebases or logs; frontier handles pinpointed queries.

Medical & Scientific Research: Sensitive data remains local; complex interpretations invoke the cloud.

Finance & Accounting: Daily transaction parsing locally; anomaly detection and strategy come from the frontier.

Academic Research: Local indexing of papers; hypothesis refinement and citation checks outsourced smartly.

RS

*

Explanation of the "Non-Locality & Minions" concept.

The "Minions" framework is a collaborative AI model that intelligently divides tasks between a powerful, remote "frontier" AI and a smaller, efficient AI running locally on your device.

This hybrid approach, which you've termed "Non-Locality," aims to balance performance, cost, and privacy by delegating work in a manner similar to natural "herd tactics."

The Core Concept: AI Collaboration

At its heart, the Minions framework, developed by Stanford's Hazy Research Group, addresses a fundamental trade-off in AI:

Remote "Frontier" Models: These are extremely powerful models (like GPT-4) running in cloud datacenters.

They offer high accuracy and complex reasoning but come with significant costs, latency, and privacy concerns since your data must be sent offsite.

Local "On-Device" Models: These run directly on an AI PC, offering low cost, high speed, and complete data privacy..

However, they are less powerful and may struggle with tasks requiring vast context or intricate reasoning.

The Minions framework creates a bridge between these two extremes.

Instead of processing an entire task remotely, the frontier model acts as a manager..

It analyses the user's request, breaks it down into smaller, simpler subtasks, and sends only these subtasks—with minimal necessary context—to the local AI for execution.

"Herd Tactics": An Analogy

The "herd tactics" metaphor provides an intuitive way to understand this process.

The Elephant and the Herd: A large, intelligent model (the "elephant") identifies a broad goal (like finding a food source)..

It then delegates the actual work of gathering to the local models (the "herd") without needing to share its entire map or knowledge base.

Delegation and Efficiency: Just as wolves might scout a path for the pack to follow, the frontier model does the high-level planning, while the local models handle the on-the-ground execution.

This minimizes data transfer and leverages the strengths of each component.

This approach is designed to reduce the cost and privacy risks of using large models,..

The remote AI never sees the full, sensitive dataset (be it medical records, proprietary code, or financial data).

Practical Applications and Workflow

This hybrid model applies to numerous real-world scenarios:

Field | Local Model Task (The "Herd") | Remote Model Task (The "Elephant")

Document Analysis | Scans gigabytes of local logs, files, or code. | Receives small snippets or summaries to perform high-level analysis or answer complex queries.

Medical Research | Processes sensitive patient records on a secure local machine. | Receives anonymized, distilled sub-inquiries for advanced interpretation or to cross-reference with global research.

Business & Finance | Parses daily transactions and manages accounting data locally. | Is called upon to identify strategic anomalies or generate high-level financial insights from summarized reports.

Academic Research | Scans and indexes a personal library of research papers and drafts. | Helps refine a hypothesis, check citations against a vast external database, or suggest new research directions.

RS

*

Deep Dive into the Minions Framework

1. The Core Trade-Off

Every AI deployment faces a three-way tug-of-war between cost, performance, and privacy:

Cloud “frontier” models (e.g. GPT-4):

Pros: Best reasoning, huge context windows

Cons: High per-token fees, latency, full-data exposure

On-device LLMs (e.g. 7–13B parameter models on NPUs):

Pros: Low cost, instant response, data never leaves your machine

Cons: Limited context, weaker at multi-step reasoning

Minions bridges this gap by letting the frontier model orchestrate and delegate chunks of work to your local LLM,..

So you pay for, and expose to the cloud, only those minimal snippets that truly need a powerhouse brain.

2. How Minions Orchestrates Work

Frontier as Task Manager

Ingests the entire user request.

Breaks it into subtasks: data cleaning, summarization, targeted Q&A.

Local LLM as Executor

Receives each distilled subtask + minimal context.

Processes it entirely on-device.

Returns results to the frontier for any final synthesis.

Iterative Refinement

For very large inputs, both agents trade richer messages—but still only what’s needed.

Accuracy climbs close to frontier-only levels, yet token spend plummets.

3. Nature’s “Herd” Tactics in AI

Minions didn’t borrow these metaphors by accident; they mirror efficient, privacy-preserving collaboration found in ecosystems:

Beginning conception:

Elephant & Herd

Elephant (frontier) spots the goal, sends the herd (locals) off without sharing its full map.

Wolves & Ants

Wolves (frontier) chart the route; ants (locals) do the parallel grunt work.

Farmers & Cows

Farmers (remote) plan the harvest; cows (device) graze where directed, reporting yields in tiny batches.

4. Precision & Bit-Depth Considerations

When running local LLMs, model weight precision (4-, 8-, 16-bit) dramatically influences speed, memory, and fidelity:

4-bit Quantization:

Pros: Tiny footprint, ultra-fast inference

Cons: May lose nuance in complex reasoning

8-bit Quantization:

Sweet spot for many applications, balancing size and accuracy

16-bit / FP16:

Nearly full-precision, heavier but excels on tasks needing fine detail

Tuning your local hardware (NPUs/TPUs, memory bandwidth, on-chip caches) around these bit-depths can further push cost and latency toward zero.
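As a concrete illustration of the bit-depth trade-off, here is a minimal sketch of symmetric per-tensor quantization to 8-bit (the 4-bit case packs two values per byte); the scheme is generic, not any specific runtime's:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QuantizedTensor { std::vector<int8_t> q; float scale; };

// Quantize FP32 weights to int8: tiny footprint & fast integer inference,
// at the cost of rounding away some nuance.
QuantizedTensor quantize8(const std::vector<float>& w) {
    float maxAbs = 0.f;
    for (float v : w) maxAbs = std::max(maxAbs, std::fabs(v));
    float scale = (maxAbs > 0.f) ? maxAbs / 127.f : 1.f;  // int8 range [-127, 127]
    QuantizedTensor out{ {}, scale };
    out.q.reserve(w.size());
    for (float v : w)
        out.q.push_back(static_cast<int8_t>(std::lround(v / scale)));
    return out;
}

// Dequantize a single weight; the error grows as bit-depth shrinks.
float dequantize(const QuantizedTensor& t, size_t i) {
    return t.q[i] * t.scale;
}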

5. Beyond Minions: Next Steps & Open Questions

Network Design: How do you architect LAN/WiFi or RDMA links to guarantee sub-100 ms hops?

Security Layers: Can you incorporate TPM-backed enclaves or JIT-verified code to harden the local agent?

Adaptive Delegation: What heuristics decide “local vs. remote”? Real-time performance profiling?

Model Evolution: As frontier models grow, can your local “herd” dynamically upgrade via federated distillation?

Embracing Minions means you no longer cross your fingers hoping an all-cloud or all-local solution suffices..

You choreograph a team that’s cost-smart, fast, and respects your data’s privacy.

Rupert S

*****

Dual Blend & DSC low Latency Connection Proposal - texture compression formats available (c)RS

https://is.gd/TV_GPU25_6D4

Reference

https://is.gd/SVG_DualBlend https://is.gd/MediaSecurity https://is.gd/JIT_RDMA

https://is.gd/PackedBit https://is.gd/BayerDitherPackBitDOT

https://is.gd/QuantizedFRC https://is.gd/BlendModes https://is.gd/TPM_VM_Sec

https://is.gd/IntegerMathsML https://is.gd/ML_Opt https://is.gd/OPC_ML_Opt https://is.gd/OPC_ML_QuBit https://is.gd/QuBit_GPU https://is.gd/NUMA_Thread

On the subject of how deep a personality of 4Bit, 8Bit, 16Bit is reference:

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html

https://science.n-helix.com/2022/10/ml.html

Friday, June 13, 2025

Dual Blend Mode with Vectors

CPU NPU GPU FPGA Dual source blending 2025 (c)RS : Intended target : Rendering & VESA related Direct Vectors for screen & CPU/GPU Presentation process, Machine Learning in the sense of using OpenCL & Direct-Compute & other API Involving 3D geometries.

Dual Blend Mode with Vectors


Dual Blend Mode comes in 3 parts to accelerate the display:

The CPU / GPU combined rendering is handled by Fence mode for synchronisation,

Vectors & textures are handled by the Multi Source Rendering pipeline & VecSR

https://science.n-helix.com/2022/04/vecsr.html

https://science.n-helix.com/2016/04/3d-desktop-virtualization.html

https://science.n-helix.com/2019/06/vulkan-stack.html


*

Hardware Acceleration

Dual Blending is viable in the case of Offloading; Examples: Audio, 3D Audio, Video, 3D & other Dual blending modes,

The Fence & combined texture modes can be used in many fields that use PCM Graphs, Graphs & the general practice of blending sources together,

Primarily aimed at the concept of inset video & graphics, Audio is an FFT Graph you know!

Apply carefully, You never know when a CPU will help in combination with Hardware Accelerators..

USB, Motherboard Audio, GPU & so on.. A case exists for accelerating Bluetooth Gear from the JIT Compiler & Dongle,..

*

The Fencing plan: (c)RS


The Fencing plan is to layer actions at the speed CPU & GPU modify content in single frames,

With Vulkan & DirectX 12 We worked so hard to make the API front the GPU Directly so that the CPU is not stalling the game,..

In most cases we therefore use the GPU directly, The Origins for direct GPU Low latency API lay right with RS & AMD,

However AMD had a very hard time getting their API into other GPU Manufacturers Source trees..

Microsoft DirectX12 & DX11Warp & Vulkan/OpenCL are the results..

But we need to have a CPU, An APU is normally better but we have REBAR & RDMA for CPU to GPU Data Transfers,..

There are many small issues that face Vulkan & DirectX & the ANGLE API,..

What are these issues?:

Mouse & Pointer device delivers with IO & DMA Direct to the CPU

Fonts

Sprites

Polygon maps

Textures

come from the system & hence directly from the CPU in most cases,..

SDK & API CPU originated content:

Pointers

Memory routing

System control

QAT : DMA, IO & general system control & function.

We need a direct rendering path for the CPU, We have the CPU & We can use it!

Directly leveraging the CPU's functions that are unique:

FPU 80Bit High precision floats (x87 extended precision)

AVX & SVE Direct parallel computation of a fairly high speed

Integer & Float general registers

We recognise that without proper coding most CPU Direct Display rendering does not have..:

AntiAliasing

Supersampling

Smoothing

Dithering

HDR & WCG Automated colour control

We handle these functions in the following ways:

We pass the pre-computed intermediary to the GPU

We create code that does all these in the MMX & AVX SiMD Registers

We compose the frame at a larger scale that the GPU will use for the final rendering..

Super Upscaling is our friend and there are many forms of upscaling to use,..

For most CPU related issues of jagged edges, The solution is that the Frame is drawn at 2x the resolution or a multiple of the final size.

We can also use SiMD Dithering & SuperSampling to handle the traditional CPU Deficit of jagged edges,..

We can also colour in greyscale & primary polygons with the GPU,..
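A minimal sketch of the 2x-resolution trick above follows: compose at twice the final size, then box-filter down; the scalar loop shown here is the form that maps naturally onto MMX/AVX SiMD:

#include <cstdint>
#include <vector>

// Downsample a 2x-resolution RGBA frame with a 2x2 box filter, smoothing
// the traditional CPU deficit of jagged edges. Scalar reference code.
void downsample2x(const std::vector<uint32_t>& src, int srcW, int srcH,
                  std::vector<uint32_t>& dst) {
    int dstW = srcW / 2, dstH = srcH / 2;
    dst.assign(static_cast<size_t>(dstW) * dstH, 0);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            uint32_t sum[4] = {0, 0, 0, 0};
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    uint32_t p = src[(2 * y + dy) * srcW + (2 * x + dx)];
                    for (int c = 0; c < 4; ++c)
                        sum[c] += (p >> (8 * c)) & 0xFF;  // RGBA channels
                }
            uint32_t out = 0;
            for (int c = 0; c < 4; ++c)
                out |= ((sum[c] / 4) & 0xFF) << (8 * c);  // average of 4 texels
            dst[static_cast<size_t>(y) * dstW + x] = out;
        }
}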

So why? Whatever the deficits of the CPU are,..

The direct high precision qualities inside the FPU & AVX/SiMD for the CPU are at least Double the final quality of most GPU Functions..

CPU FPU & SiMD & Integral 32/64Bit functions can flourish the displayed content..

Presenting an educated SDK/API sampling for what the component Processing features are takes skill! We have it, It takes education.. We have it & we will have!

Composing the Final view point from all composing parts requires a specific set of solutions:

Frame jitter (misaligned SiMD, CPU, GPU, Audio)

Finalised frame : Gating .. Fence Mode for GPU & CPU

Synchronised & fast data transfers: Enhanced IO RDMA & Rebar

Security : AES, ECC & Enhanced media protocol DMA & TCP/UDP/QUIC Hyper Frame transport

These are my solutions, These are our solutions..

Rupert S

*

Fence Mode PTP Dynamic Regulation (c)RS


To conceptualise fence mode in codecs we need to do a little illustration..

I = Fence, Fences are timed with PTP Timers
D = Draw, tools CPU & GPU to fill frame, Because of the fences,.. All content is cleanly drawn

We can time fences from when finalised or draw them Dynamically timed based on Internal performance profiling & PTP Timers,..

We use PTP timers with HDMI & DisplayPort & the Display Panel & We can do the same for Audio & other dynamic elements too!

Also such as Harddrive & RAM & PCIe too!

I D I D I D I D I D I

If you like Frame timing of most varieties is very logical & most technology can use it!

For example Wheel & Shock Suspension Dynamic Timing & Pulmonary action of artificial hearts & heart stimulators,.. Need PTP Dynamic Regulation.
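A minimal sketch of the I D I D cadence, assuming std::chrono::steady_clock as a stand-in for a real PTP-disciplined clock, with hypothetical waitForGpu / drawFrame stubs:

#include <chrono>
#include <thread>

void waitForGpu() {}  // hypothetical: block until the GPU signals its fence
void drawFrame()  {}  // hypothetical: CPU & GPU fill the frame

// I D I D I ... : each fence (I) gates the next draw (D) on a timer tick.
void fenceLoop(double hz) {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / hz));
    auto next = clock::now();
    for (;;) {
        waitForGpu();                     // I: fence, timed by the (PTP) clock
        drawFrame();                      // D: all content cleanly drawn
        next += period;                   // dynamic retiming could adjust here
        std::this_thread::sleep_until(next);
    }
}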

(c)Rupert S

*

QFT & VRR Fence mode: (C)RS


VRR & QFT & frame rate deviations over time..

What fence mode does is allow us to buffer a work block so all tasks are finished before we write frame Shader blocks..

We use ETA, Delivery Time & Estimated work time, To allow ML & DL to directly optimise the packet system..

Fence Mode is for DirectX, OpenCL, OpenGLES, Vulkan & VESA Displays..

CPU Rendering into the GPU SiMD Shader pipeline requires:

GL_NV_fence VK_KHR_present_wait2

https://www.phoronix.com/news/Vulkan-1.4.317-Released

https://developer.download.nvidia.com/assets/gamedev/docs/VertexArrayRange.pdf

https://registry.khronos.org/OpenGL/extensions/NV/NV_fence.txt

https://docs.amd.com/r/en-US/ug1784-versal-ai-gen2-gpu/Vulkan-Extensions

What Fence does is use properties to define a load group for display, We need to know the clock rates: the CPU is 800MHz to 5GHz on the average phone,..

The Phone processor may be between 400MHz & around 2.5GHz (Quad core Sony) While the GPU is between 250MHz & 1200MHz,.. So..

When the CPU writes the Texture, Polygon & colour maps, The Cycle differentiators usually mean calculating the difference with fractions,..

CPU at 2x the clock speed of the GPU means 2:1 Cycles per write, As an example, You can do it by polling the Frame rate & Writes per frame with maths,..

Fence & Present wait is where you set a frame delivery timeline, So we can deliver a single clear frame at a steady rate of Hz,..

Fence however does the conditional wait by groups of shaders, The relevance of this fact is that these days we use VRR & QFT & frame rate deviates over time.

The Fence solution is per screen block & We will use that to update per segment, VRR Fence mode.

Input threads, Core count multiplexed by average division between CPU & GPU Clock Cycle Effective work,..

For example my FX8320E does 2 threads SMT per core.. So with 8 cores & 2 threads per CPU 16 total threads:

8 Cores, 2 Threads per core SMT : { a1, b1, c1, d1, a2, b2, c2, d2, a3, b3, c3, d3, a4, b4, c4, d4 };

CPU 3.5Ghz, GPU 2Ghz, So.. 3.5:2 reduces to 1.75 to 1,

CPU AVAILABLE_ASYNC_QUEUES_AMD: 2, MAX_COMPUTE_UNITS: ?

GPU AVAILABLE_ASYNC_QUEUES_AMD: 2, MAX_COMPUTE_UNITS: 16

So at 16 CU per task, Both the CPU & GPU are fairly simple & the result is 1.75 to 1, or rather 7 to 4 when we make an approximate whole number of it..

Tasks Array, 4 groups of 4:

A{1,2,3,4}
B{1,2,3,4}
C{1,2,3,4}
D{1,2,3,4}

This allows us to divide the screen into 16 Groups & Refresh them VRR/QFT at 2Ghz at a rate of 3.5 to 2 & 2000Mhz / 60Hz around 32x a frame,

At that approximate speed we could fully modify each zone 32x per frame,

In actual fact we would be using most of the clock cycles for Maths SiMD Tasks, Textures, Shading, 3D & 2D & DRAW..

We could still manage at least 5x per group : { a1, b1, c1, d1, a2, b2, c2, d2, a3, b3, c3, d3, a4, b4, c4, d4 };

We can Fence each zone & VRR / QFT as we want.
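A minimal sketch of the clock-ratio & zone bookkeeping above, using the FX8320E example figures; the ratio maths is approximate:

#include <cstdio>

int main() {
    const double cpuGHz = 3.5, gpuGHz = 2.0;
    const int zones = 16;                          // 4x4 screen groups { a1 .. d4 }
    const double ratio = cpuGHz / gpuGHz;          // 3.5:2 = 1.75 : 1 (~7:4)
    const double cyclesPerFrame = (gpuGHz * 1e9) / 60.0;  // GPU cycles per 60 Hz frame
    std::printf("CPU:GPU ratio %.2f : 1\n", ratio);
    std::printf("GPU cycles per frame: %.0f across %d zones\n",
                cyclesPerFrame, zones);
    // Most cycles go to SiMD maths, textures, shading & draw; budgeting even a
    // small slice still allows several full refreshes of each zone per frame.
    return 0;
}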

Rupert S

*

A Multiple Source Rendering Pipeline


Dual source blending is going to make a lot of sense for games,

Where DirectX12 removes the CPU from the game render target,

Dual Source is not just composing 2 Shaders in a single pixel array, It is also composing with more than 1 device,..

CPU & GPU & Also Parallel Multi Render pipeline..

Using direct CPU blends for menus & small polygon renders (in High resolution SR) Where the CPU non alpha blend makes sense!

Well it makes more sense when you can : MMX AVX SiMD Blends & Especially ADDER blends that can use the CPU Integer Instructions!

Observations of the CPU to GPU Pipeline are like so!

Texture creation can be expensive for the CPU, So you cannot go far,

Simple Texture example, As in Simple to Compress on CPU:

However you can use texture formats like Grey Scale Alpha : RA, RX, RGA to emulate grey shading for polygon draw, So called texture-on-top-of CPU Rendering,

SVG XML

Another format that can be used by the CPU is SVG & SVG allows rendering of polygons in an optimised layer or 3D Mesh,..

Polygons can be pre culled by the CPU from high resolution meshes & created as SVG XML

Polygon SVG / Font Dictionary Estimation

Fonts & Polygon cache fonts: SVG XML & Font Systems can compose dictionaries of polygon shapes to estimate the final result from Dictionary estimation..

How does it work?

You cube map your outline polygon (present in 3D Render or there is no work)

Estimate the best shape from a pre composed & optimised Polygon Font that has shapes in 2D & 3D in the dictionary,..

The result is that high quality pre composed polygons can be pushed into the ZBuffer & frame space,..

Both as a texture, & or cube map in ZBuffer for uploading to GPU,..

Allows dynamic content such as explosions & effects such as skin deformation & bones, noses, etcetera to be hand crafted for the scene but dynamically made into the final render,..

Thus saving storage with pre compressed content.

Logical proof that shaders can add pre composed textures to emulate polygons...
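A minimal sketch of the dictionary estimation step: match an outline's coarse descriptor against the precomposed polygon/font dictionary & reuse the closest prebuilt shape; the L2 descriptor match is illustrative, not prescriptive:

#include <cstddef>
#include <vector>

struct Shape { std::vector<float> descriptor; int meshId; };

// Find the dictionary shape whose descriptor best matches the outline's.
int bestMatch(const std::vector<Shape>& dict, const std::vector<float>& outline) {
    int best = -1;
    float bestDist = 1e30f;
    for (const Shape& s : dict) {
        float d = 0.f;                     // L2 distance between descriptors
        for (std::size_t i = 0; i < outline.size() && i < s.descriptor.size(); ++i) {
            float diff = outline[i] - s.descriptor[i];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = s.meshId; }
    }
    return best;  // prebuilt polygon to push into the ZBuffer / frame space
}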

Rupert S

*

Chrome Example : Dual source blending : RS


A game or chrome requires a UI, But we will discuss the process of rendering with the CPU & GPU Productively & well,..

Method list:

Dynamic Micro ZBuffers, We wish to render a depth array of polygons then a Micro ZBuffer is allocated to part of the screen & a depth,..

We will Assign an array of 10 Layers, In ML you use layers for dimensions & we will do the same,.. 10 layers is a reasonable amount for a web page,.. We could easily assign more!

Layers { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }, We can assign layers to groups : { A, B, C, D };

We assign a Micro ZBuffer with dimensions { X, Y } : { A, B, C, D } : { X, Y, Z } & Location Displacement on screen : { Xd, Yd };

We will be compressing our ZBuffers & We will be using:

SVG XML Polygons Packing for pre rendering,.. & Font Hinting to save further processing requirements

Processed MATHS XML

We can use Texture Conversion if we like!

Normally we would be flattening the layer on finalisation for ease of use,..

Rendering is one of Polygon arrays, SVG XML Polygons, Textures & fonts.

We will pass Compressed Micro Zbuffers back and forth between the GPU & the CPU to make the work look seamless!

We will thus be able to process MATHS XML on both the GPU & The CPU at the same time, Per frame
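A minimal sketch of the layer & Micro ZBuffer bookkeeping above; field names are illustrative:

#include <array>
#include <cstdint>
#include <vector>

enum class Group : uint8_t { A, B, C, D };

struct MicroZBuffer {
    Group group;                // assigned group { A, B, C, D }
    int x, y, z;                // buffer dimensions { X, Y, Z }
    int xd, yd;                 // location displacement on screen { Xd, Yd }
    std::vector<float> depth;   // compressed in transit between CPU & GPU
};

struct PageLayers {
    std::array<MicroZBuffer, 10> layers;  // { 1 .. 10 }, flattened on finalisation
};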

Rupert S

*

Cube maps & Micro ZBuffers


Method:

Now to assign a Micro ZBuffer or cube map, We have to fetch the full screen space size & map the screen space to cubes,..

We can Shader render each Cube Map & Micro ZBuffer with either Textures & SVG Polygon drawing or with a depth array ZBuffer,..

We can also allocate the Full ZBuffer from the task, But Allocating the Full Buffer is too large for our cache arrangement,..

So we allocate Micro ZBuffers & Cube Maps that we can draw polygon arrays into (For 3D & 2D AKA WebGL & WebGPU),..

We can also arrange RGBA Textures & SVG Polygons in layers or mapped to 2D & 3D Shapes,..

Cube Maps, Micro ZBuffers & Textures, SVG Polygons once Mapped to the buffer allow dynamic refreshing with low latency & Processor usage,..

ZBuffer & Cube Map Buffer

A, B, C, D, E, F, G
1:
2:
3:
4:
5:
6:
7:

Micro Allocation sample:

4 Block

Location C, D, 1, 2

Content 'Buffer Array' {(), (), (), ()}

If we move the screen we can remap the displacement map & virtually move it,..

If we allocate the entire screen space / Web Page / UI to the total space then we can displace in the total CPU / GPU ZBuffer,..

We can keep a small displacement map locally with a size of ZBuffer parameter that does not take too large a space in RAM / Cache.
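A minimal sketch, building on the MicroZBuffer struct above: moving the screen just remaps the displacement, so the buffer contents stay put & no redraw is needed:

// Virtually move the block: update { Xd, Yd } in the small local
// displacement map rather than touching the depth/texture data.
void scrollView(MicroZBuffer& mz, int dxPixels, int dyPixels) {
    mz.xd += dxPixels;
    mz.yd += dyPixels;
}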

Rupert S

*

Cube maps & Layered rendering : (c)RS


The problem: Efficiency

Right so chrome looks wonderful but Micro ZBuffers, Layers, Cube maps 10 PCX deep, aka 10 layers,.. GPU Usage 100% on simple Video pages with 4K Video & Chat window, Youtube & Social media & so on..

Now the stream looks very good, But 100% GPU,..

The 3D plan:

FSR, 4x AA, Super Sampling 16x Performance, Texture sharpening, blending etcetera on

Now I will iterate the following: Rendering methodology

Firstly The Micro ZBuffers, Cube maps, Polygon & texture layers should be as deep as required by the page,..

Total depth is a reference of 5 deep, So for example :

Overlay & Video

Micro Cube-Map / MicroZBuffer : 10x10pcx by 5 Deep for overlay,

3D & 2D Deep content

Micro Cube-Map / MicroZBuffer : 10x10pcx by 10 Deep x X, Y, Z, or maybe larger Cube-Map / ZBuffer Depth Sizes

Most pages would have about 4 layers:

Game / App / Chrome UI render layer 1

Fonts & Overlays Layer 2

Page layout Layer 3

Video Stream Layer 4

Application overlay Layer 1

Text overlays & Page UI Layer 2

Now in order of priority if the Video is priority, that needs to be layer 3

That is 3 to 5 layers of 1 pcx each

Now 3D Deep rendered UI with 3D Images, for example a plane radar or integrity page 3D Image in the UI means the same priority lists:

Game / App / Chrome UI render layer 1

Fonts & Overlays Layer 2

Page layout Layer 3

Boxes for animation 3D 4 : Deep Cubemaps

Video Stream Layer 5

3D & 2D Application content, such as a plane window 6 : Deep Cubemaps

Firstly they have to define depth, 2D Content is layer or cubemap, But not with a lot of depth,..

Firstly on the performance side, RX280 4G can render Frontier Elite Dangerous in 2000 x 1000 at 60FPS with FSR, 4x AA, Super Sampling 16x Performance, Texture sharpening, blending etcetera on..

Firstly the layers or cubemaps should each be as deep as required but no deeper,..

Video is preferred 1 layer deep &or Cube-Mapped on a single layer,..

Don't over analyse depth test on animated 2D content, Single test, Depth & run texture at the correct depth, Does not need to be refreshed, Unless changed.

Rupert S

*

3D Layers & 3D Geometric micro layers / Micro ZBuffers / Render tiles & other forms of mathematical geometry, For use in ML, Learning & Graphics presentation : RS


Optimising ML, Using Micro Tiling Dimensional Arrays, Sometimes separate so the Computation can be in parallel & also optimised per thread.

Machine Learning in the sense of using OpenCL & Direct-Compute & other API Involving 3D geometries.

Now it makes sense while regarding the other works in this document to think about Geometric, Volumetric & Layer Acceleration in Machine Learning,..

A common feature to use in Code & ML is OpenCL, JIT Compiler & Direct Compute,..

Maths are the primary strategic appliance of ML, Afterall Maths & calculations are the majority of our education & function as higher education, Work & life in research & practice ..

Common usage of dimensions in thought & Human, Machine, ML &or AI:

Common arguments on the maths arrangement of ML, Is reason, Now Greek Philosophers, Nay Scientists displaced water with the apple & founded Mass,..

Doctors measure wounds & count lesions & germs or viruses & cell counts for cancer!

Engineers need to measure a bridge or create one with the required strengths, mass & tension & of course the desire for that to look good too, So aesthetics!

Dimensional parameters are used to create rules by evolution in ML, That is to say that we measure the "Game of Life", If you don't know the game of life,..

G.O.L is normally germs, microbes, ants & other life forms such as Humans, Humans? Yes! Rogue is a common game of life game that has existed so far back that it was drawn in ASCII Text on an IBM 16 Colour computer & a BBC Micro!

So we need dimensions, For something like 80% of all ML is dimension related maths..

We can use dimension size priority from the earlier work in this document & state that we will be optimising the ML, Using Micro Tiling Dimensional Arrays, Sometimes separate so the Computation can be in parallel & also optimised per thread.

So we will be using the following concepts in ML & Application Gaming:

Layers, Cube-Maps & Micro ZBuffers present our dimensional arrays & the methods by which we shall compress & optimise our operations..

Buffer & Micro ZBuffer Technology:

Layered Drawing : { 3D & 2D }

SVG Vector : { 3D & 2D }

Texture Format sent directly to the display : { 3D & 2D }

DSC Frame, Directly Rendered

Codecs & Frame By Frame : Texture & SVG + Vectors

Machine Learning & Draw related functions :

N Cubic < N 2 + 1 & so on, Gather & Scatter, Layer, Dimension & so on

https://www.w3.org/TR/webnn/#api-mlgraphbuilder-gathernd

The relevance to us is that both WebGPU/GL & WebNN can scatter & group,..

Known as multithreading & Single Thread performance modes:

Gather them into optimised groups

Scatter them over an array of independent tasks,

Combine tasks on a CPU... for single thread heavy

Scatter them so that we can parallel thread..

Tessellate between them

Combine or multi thread Polygon &or draw

Polygons for example in dense fields require Grouping & Scattering,.. So we can:

Do .. Work & #DoWorkSocial
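A minimal sketch of the gather/scatter split above: gather related tasks onto one thread for single-thread-heavy work, scatter independent tasks across a pool; Task is a stand-in for polygon/draw/ML work items:

#include <functional>
#include <thread>
#include <vector>

using Task = std::function<void()>;

// Gather: combine tasks on one CPU thread (single-thread-heavy mode).
void gatherRun(const std::vector<Task>& tasks) {
    for (const Task& t : tasks) t();
}

// Scatter: spread independent tasks over parallel threads.
void scatterRun(const std::vector<Task>& tasks) {
    std::vector<std::thread> pool;
    pool.reserve(tasks.size());
    for (const Task& t : tasks) pool.emplace_back(t);
    for (std::thread& th : pool) th.join();
}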

Rupert S

*

Graphite presents...


https://blog.chromium.org/2025/07/introducing-skia-graphite-chromes.html

https://science.n-helix.com/2025/06/dualblend.html

Dual Source Blending parallel pipeline,..

This presents the improvements of APIs like Vulkan, Metal and D3D12, with multithreading, & exposes new GPU capabilities,..

Yes, Dual Source Blending is here to stay!

RS

2D Depth Testing: Assigns each draw a z-value, enabling early rejection of occluded opaque primitives and clipping via the depth buffer rather than a software clip stack..

This dramatically slashes overdraw and simplifies shader state management.

Multithreaded Recording: Independent Recorders on worker threads assemble command buffers in parallel..

The main GPU thread only submits pre-built recordings, keeping scheduling, compilation, and CPU-heavy work off the critical path.

Consolidated Pipeline Variants: Instead of Ganesh’s explosion of specialized shaders, Graphite merges similar draw cases into fewer pipelines..

By precompiling these at startup, it avoids mid-frame jank from on-the-fly shader builds.

Future Directions

True multithreaded rasterization across tiles or threads

Compute-based path rasterization (e.g., Pathfinder-style) for higher quality MSAA or CPU offload

Dynamic re-issuance of Graphite recordings to reduce GPU memory for simple, frequently translated content

Dual-source blending lets a fragment shader emit two colour outputs into a single render target slot,..
Giving the blend unit two independent source factors per pixel..

This doubles the inputs to the blend equation and enables advanced effects like order-independent transparency or cel shading without extra passes.

OpenGL / Vulkan

Fragment outputs: layout(location=0, index=0) out vec4 src0; layout(location=0, index=1) out vec4 src1;

Blend factors: SRC1_COLOR, ONE_MINUS_SRC1_ALPHA, etc.

Vulkan requires enabling the dualSrcBlend device feature to use the SRC1_* enums and blend operations.

D3D11 / D3D12

Shader outputs: SV_Target0 and SV_Target1 map to the SRC1_* blend enums (SrcBlend / SrcBlendAlpha) in the output-merger stage.

Only slot 0 supports dual-source blending on most hardware; writing other targets is undefined.

WebGPU

The dual-source-blending feature adds WGSL’s @blend_src attribute at @location(0), letting you choose "src1", "one-minus-src1", "src1-alpha", etc., in your pipeline’s blend descriptor

Rendering Pipeline Stage Breakdown

Stage | Description

Input Assembler | Bind vertex/index buffers, define topology

Vertex Shader | Transform positions, forward varyings (e.g., UVs)

Primitive Assembly | Assemble primitives into triangles

Rasterizer | Scan-convert triangles, generate fragments

Fragment Shader | Emit two outputs (src0, src1) for dual-blend

Depth Test | 2D depth testing orders opaque draws to minimize overdraw

Output Merger (OM) | Dual-source blending: final = src0 * F(src0, src1) + dst * G(src0, src1); write depth/stencil

Framebuffer Write | Store the blended color and updated depth value

// Device & Swapchain Setup

// Request WebGPU device with the dual-source blending feature
// (Dawn-style C++ bindings; synchronous adapter/device helpers assumed here.
//  Plain depth testing needs no extra feature.)
wgpu::FeatureName requiredFeatures[] = { wgpu::FeatureName::DualSourceBlending };
wgpu::DeviceDescriptor deviceDesc{};
deviceDesc.requiredFeatures = requiredFeatures;
deviceDesc.requiredFeatureCount = 1;
auto adapter = instance.RequestAdapter();
wgpu::Device device = adapter.RequestDevice(&deviceDesc);

// Configure swapchain and depth buffer
wgpu::TextureFormat colorFmt = wgpu::TextureFormat::BGRA8Unorm;
wgpu::TextureFormat depthFmt = wgpu::TextureFormat::Depth24PlusStencil8;
CreateSwapchain(device, colorFmt);
CreateDepthTexture(device, depthFmt);

// Shader Modules (WGSL)

// Vertex shader: passes position through & forwards the UV varying
// (the original dropped the UV, which the fragment stage reads at location 1)
struct VSOut {
  @builtin(position) pos: vec4<f32>,
  @location(1) uv: vec2<f32>,
};

@vertex
fn vs_main(@location(0) pos: vec3<f32>,
           @location(1) uv: vec2<f32>) -> VSOut {
  var out: VSOut;
  out.pos = vec4<f32>(pos, 1.0);
  out.uv = uv;
  return out;
}

// Fragment shader: two outputs at @location(0) via @blend_src
enable dual_source_blending;

@group(0) @binding(0) var colorTex: texture_2d<f32>;
@group(0) @binding(1) var colorSampler: sampler;

struct FSOut {
  @location(0) @blend_src(0) baseColor: vec4<f32>,
  @location(0) @blend_src(1) glowMask: vec4<f32>,
};

@fragment
fn fs_main(@location(1) uv: vec2<f32>) -> FSOut {
  var out: FSOut;
  out.baseColor = textureSample(colorTex, colorSampler, uv);
  out.glowMask = vec4<f32>(uv.x, uv.y, 0.0, 1.0);
  return out;
}

// Pipeline Layout & Blend State

// Color target with dual-source blend enabled
wgpu::BlendState blend{};
blend.color.srcFactor = wgpu::BlendFactor::Src;
blend.color.dstFactor = wgpu::BlendFactor::Src1;
blend.color.operation = wgpu::BlendOperation::Add;
blend.alpha.srcFactor = wgpu::BlendFactor::OneMinusSrc;
blend.alpha.dstFactor = wgpu::BlendFactor::Src1Alpha;
blend.alpha.operation = wgpu::BlendOperation::Add;

wgpu::ColorTargetState colorTarget{};
colorTarget.format = colorFmt;
colorTarget.blend = &blend;
colorTarget.writeMask = wgpu::ColorWriteMask::All;

// Depth-stencil: 2D depth test for opaque draw reordering
wgpu::DepthStencilState depthState{};
depthState.format = depthFmt;
depthState.depthWriteEnabled = true;
depthState.depthCompare = wgpu::CompareFunction::Less;

// Build the render pipeline (color targets hang off a FragmentState)
wgpu::FragmentState fragState{};
fragState.module = fsModule;
fragState.targetCount = 1;
fragState.targets = &colorTarget;

wgpu::RenderPipelineDescriptor pDesc{};
pDesc.vertex.module = vsModule;
pDesc.fragment = &fragState;
pDesc.depthStencil = &depthState;
pDesc.multisample.count = 1;
pDesc.multisample.mask = 0xFFFFFFFF;  // full sample mask; 0 would draw nothing
auto pipeline = device.CreateRenderPipeline(&pDesc);

// Multithreaded Recording & Submission

// Worker thread function
// (CommandRecorder & Mesh are this design's stand-ins for a Graphite-style
//  Recorder & app geometry, not stock WebGPU types)
void RecordDrawCommands(CommandRecorder& rec, Mesh& mesh) {
rec.Begin();
rec.SetPipeline(pipeline);
rec.SetBindGroup(0, mesh.bindGroup);
rec.SetVertexBuffer(0, mesh.vertexBuffer);
rec.SetIndexBuffer(mesh.indexBuffer);
rec.DrawIndexed(mesh.indexCount);
rec.End();
}

// Main submission loop
CommandRecorder rec1(device), rec2(device);
std::thread t1(RecordDrawCommands, std::ref(rec1), std::ref(meshA));
std::thread t2(RecordDrawCommands, std::ref(rec2), std::ref(meshB));
t1.join(); t2.join();

// Submit both recordings in a single frame
wgpu::CommandBuffer cmds[] = { rec1.Finish(), rec2.Finish() };
wgpu::Queue queue = device.GetQueue();
queue.Submit(2, cmds);


//*****
// V2 C
// Worker-Thread Command Recording (C++)


void RecordDrawCommands(CommandRecorder& rec, const Mesh& mesh) {
rec.Begin();
rec.SetPipeline(pipeline);
rec.SetBindGroup(0, mesh.bindGroup);
rec.SetVertexBuffer(0, mesh.vertexBuffer);
rec.SetIndexBuffer(mesh.indexBuffer);
rec.DrawIndexed(mesh.indexCount);
rec.End();
}

// Spawn threads, record, then submit:
CommandRecorder rec1(device), rec2(device);
std::thread t1(RecordDrawCommands, std::ref(rec1), meshA);
std::thread t2(RecordDrawCommands, std::ref(rec2), meshB);
t1.join(); t2.join();
wgpu::CommandBuffer cmds[] = { rec1.Finish(), rec2.Finish() };
queue.Submit(2, cmds);

// Example: WebGPU Setup with Dual-Source Blending & 2D Depth Test

// Request device with the dual-source blending feature
wgpu::FeatureName features[] = { wgpu::FeatureName::DualSourceBlending };
wgpu::DeviceDescriptor deviceDesc{};
deviceDesc.requiredFeatures = features;
deviceDesc.requiredFeatureCount = 1;
auto adapter = instance.RequestAdapter();
auto device = adapter.RequestDevice(&deviceDesc);

// Swapchain & Depth Buffer
CreateSwapchain(device, wgpu::TextureFormat::BGRA8Unorm);
CreateDepthTexture(device, wgpu::TextureFormat::Depth24PlusStencil8);

// WGSL Shaders
// Vertex: passes pos & forwards the UV varying
struct VSOut {
  @builtin(position) pos: vec4<f32>,
  @location(1) uv: vec2<f32>,
};

@vertex
fn vs_main(@location(0) pos: vec3<f32>,
           @location(1) uv: vec2<f32>) -> VSOut {
  var out: VSOut;
  out.pos = vec4<f32>(pos, 1.0);
  out.uv = uv;
  return out;
}

// Fragment: emits baseColor & glowMask as dual sources
enable dual_source_blending;

@group(0) @binding(0) var colorTex: texture_2d<f32>;
@group(0) @binding(1) var colorSampler: sampler;

struct FSOut {
  @location(0) @blend_src(0) baseColor: vec4<f32>,
  @location(0) @blend_src(1) glowMask: vec4<f32>,
};

@fragment
fn fs_main(@location(1) uv: vec2<f32>) -> FSOut {
  var out: FSOut;
  out.baseColor = textureSample(colorTex, colorSampler, uv);
  out.glowMask = vec4<f32>(uv.x, uv.y, 0.0, 1.0);
  return out;
}

// Blend State for dual-source
wgpu::BlendState blend{};
blend.color.srcFactor = wgpu::BlendFactor::Src;
blend.color.dstFactor = wgpu::BlendFactor::Src1;
blend.color.operation = wgpu::BlendOperation::Add;
blend.alpha.srcFactor = wgpu::BlendFactor::OneMinusSrc;
blend.alpha.dstFactor = wgpu::BlendFactor::Src1Alpha;
blend.alpha.operation = wgpu::BlendOperation::Add;

wgpu::ColorTargetState colorTarget{};
colorTarget.format = wgpu::TextureFormat::BGRA8Unorm;
colorTarget.blend = &blend;
colorTarget.writeMask = wgpu::ColorWriteMask::All;

// Depth-Stencil State
wgpu::DepthStencilState depthState{};
depthState.format = wgpu::TextureFormat::Depth24PlusStencil8;
depthState.depthWriteEnabled = true;
depthState.depthCompare = wgpu::CompareFunction::Less;

// Pipeline Descriptor (color targets hang off a FragmentState)
wgpu::FragmentState fragState{};
fragState.module = fsModule;
fragState.targetCount = 1;
fragState.targets = &colorTarget;

wgpu::RenderPipelineDescriptor pDesc{};
pDesc.vertex.module = vsModule;
pDesc.fragment = &fragState;
pDesc.depthStencil = &depthState;
pDesc.multisample.count = 1;
pDesc.multisample.mask = 0xFFFFFFFF;  // full sample mask
auto pipeline = device.CreateRenderPipeline(&pDesc);

RS

*

Deep Random forest

ML for tasks such as 3D Audio is basically a Deep Random Forest,

Essentially a Gaussian mesh that is optimised over days,

In essence once trained they require almost no processing,

Think of a random forest as 9000 option choices in a configuration.

You may begin training Random Forests to your hearts content,

The main content is XML Tables & option choice lists,..

Compress with GZIP, Deflate, LZ4 & done!
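A minimal sketch, assuming zlib (link with -lz); GZIP & LZ4 would be drop-in alternatives:

#include <string>
#include <vector>
#include <zlib.h>

// Pack a trained forest's XML option tables with zlib's one-shot compress().
// Once trained, lookup needs almost no processing; the tables just need space.
std::vector<unsigned char> packForest(const std::string& forestXml) {
    uLongf bound = compressBound(forestXml.size());
    std::vector<unsigned char> out(bound);
    compress(out.data(), &bound,
             reinterpret_cast<const Bytef*>(forestXml.data()),
             forestXml.size());
    out.resize(bound);   // shrink to the actual compressed size
    return out;
}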

Moderately simple tasks with a regular tick, Such as pace makers & car wheels, Enhancement

RS

*

Direct Vectors : A deeper View : VESA, Displays & Applications

https://en.wikipedia.org/wiki/Matrix_(mathematics)

High Performance Direct Rendering & Indirect Texture Creation & Presentation,..

Expected Hardware for modern 4K & 8K TV & Monitor : Mali GPU & ARM CPU with Vector SiMD, X64 AMD/Intel X86, RISCV + Vec,..

With these capacities we can! #YesWeCan

VESA & HDMI drawing directly to the Frame

DSC & Texture Formats from the CPU & GPU are a logical choice, So we write Vector Drawing directly to the Texture Format, with Anti Aliasing, Super Sampling & Dithering Error Reductions for HDR & WCG,..

Direct Vector is where we send Vectors along the pipeline to the display,..

We can simplify the contents as 2D with SVG Polygons & Flat texture rendering,..

We can make it complex & use 3D ZBuffers or Layered Rendering, For common usage we would prefer to flatten, Apart from 3D TV's & VR! Where 3D Input has more processors available on the display..

Send that directly to the Display from the GPU & CPU.

Table :

Internally rendered from CPU & GPU & Sent to display : Direct & Indirect, Device to device rendering pipeline.

Buffer & Micro ZBuffer Technology:

Layered Drawing : { 3D & 2D }

SVG Vector : { 3D & 2D }

Texture Format sent directly to the display : { 3D & 2D }

DSC Frame, Directly Rendered

Codecs & Frame By Frame : Texture & SVG + Vectors

Layers, Cube-Maps & Micro ZBuffers present our dimensional arrays & the methods by which we shall compress & optimise our operations..

The VBE Video Bios Extensions have not been updated, So we will make these!

But some 2D & 3D SDK will be useful!

The objective being to accelerate the HDMI & VESA Display Ports, The Displays, The applications such as Games, Chrome, Angle, DirectX & OpenCL/GL, Vulkan & Metal

https://shawnhargreaves.com/freebe/freebs12.zip

https://github.com/google/angle

Reference

https://is.gd/SVG_DualBlend https://is.gd/MediaSecurity https://is.gd/JIT_RDMA

https://is.gd/PackedBit https://is.gd/BayerDitherPackBitDOT

https://is.gd/QuantizedFRC https://is.gd/BlendModes https://is.gd/TPM_VM_Sec

https://is.gd/IntegerMathsML https://is.gd/ML_Opt https://is.gd/OPC_ML_Opt https://is.gd/OPC_ML_QuBit https://is.gd/QuBit_GPU https://is.gd/NUMA_Thread

(C)Rupert S

Additional information on VBE

The VBE Bios Extensions have not been updated, So 2D & 3D Drawing may not be standard

"VESA Bios version 3.0 (access to linear framebuffer video memory, high speed protected mode bank switching, page flipping, hardware scrolling, etc), and adds the ability to use 2D hardware acceleration in an efficient and portable manner"

2D+3D Acceleration Reference Video-Bios-Extension V3

http://www.petesqbsite.com/sections/tutorials/tuts/vbe3.pdf

https://en.wikipedia.org/wiki/VESA_BIOS_Extensions

https://www.thejat.in/learn/vesa-bios-extensions-vbe

https://shawnhargreaves.com/freebe/

https://shawnhargreaves.com/freebe/freebs12.zip

https://www.drdobbs.com/architecture-and-design/examining-the-vesa-vbe-20-specification/184409592

*****

References

https://science.n-helix.com/2019/06/vulkan-stack.html

https://science.n-helix.com/2022/04/vecsr.html

https://science.n-helix.com/2016/04/3d-desktop-virtualization.html

https://science.n-helix.com/2025/06/dualblend.html

VSR https://drive.google.com/file/d/1hewfYqLmY0z-Am800LMR-6H-P5J0Sr0N/view?usp=drive_link

VecSR https://drive.google.com/file/d/1WDvpD9a6TttMTmIz_sRYWaQT3RExBuSq/view?usp=drive_link

https://science.n-helix.com/2022/10/ml.html

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html

https://science.n-helix.com/2022/06/jit-compiler.html

https://science.n-helix.com/2022/08/jit-dongle.html

https://science.n-helix.com/2022/09/audio-presentation-play.html

Innate Compression, Decompression

https://science.n-helix.com/2022/03/ice-ssrtp.html

https://science.n-helix.com/2022/09/ovccans.html

https://science.n-helix.com/2023/02/smart-compression.html

Tuesday, October 29, 2024

TLS - Secure Negotiation & Transfer agreements in a modern IOT Friendly way, With PSK, ML-KEM's & ASCON

5 Way HAND https://is.gd/ECH_TLS : AES AlaML-KEM Falcon DES5 00:33 20/10/2024 - 2018 Rupert S



in reference to :


https://csrc.nist.gov/Projects/block-cipher-techniques

https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8459.pdf
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar3.ipd.pdf

ECH first, Client interactions with server (DNS is first though)

https://developers.cloudflare.com/ssl/edge-certificates/ech/
https://datatracker.ietf.org/doc/draft-ietf-tls-svcb-ech/

PSK & Updating DNS Security Profile

https://datatracker.ietf.org/doc/draft-eastlake-dnsop-rfc2930bis-tkey/

PSK & Updating DNS Security in use

https://datatracker.ietf.org/doc/draft-ietf-uta-tls13-iot-profile/

https://datatracker.ietf.org/doc/draft-ietf-tls-extended-key-update/

Logging keys leads to debugging & Kracks in the wall with eyes

https://datatracker.ietf.org/doc/draft-ietf-tls-ech-keylogfile/

https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-design/

related to

Also https://www.logitech.com/content/dam/logitech/en/business/pdf/logi-bolt-white-paper.pdf

ASCON may be right for you, If you are in IOT & can barely breathe on 33MHz https://is.gd/DictionarySortJS

PSK, ML-KEM, AES

https://is.gd/ECH_TLS
https://is.gd/KeyBitSecurity
https://is.gd/AES_Strengths

https://science.n-helix.com/2022/03/ice-ssrtp.html

https://science.n-helix.com/2024/10/ecc.html

https://science.n-helix.com/2024/10/tls.html

RS

*

ID-Matrix-dev-random - AnonCRT - Generating public keys involving matrix operations
https://is.gd/MatrixGenID

In this example a Matrix M² is used with dev/random to develop a certificate ID of anonymous nature..

The common attribute is that dev/random & attached data are used to generate a key ID, Personal & Server,

Usage such as CC cards, ID & Radio or mobile data & wifi..

The principles of the cert chain!
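A minimal sketch of the idea: fill a matrix M from /dev/urandom, square it, & hash the result into an anonymous ID; std::hash is a placeholder here, a real deployment would use a cryptographic digest:

#include <array>
#include <cstdint>
#include <fstream>
#include <functional>
#include <string>

using Mat = std::array<std::array<uint64_t, 4>, 4>;

// M squared; unsigned arithmetic wraps mod 2^64.
Mat square(const Mat& m) {
    Mat r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += m[i][k] * m[k][j];
    return r;
}

uint64_t anonId() {
    Mat m{};
    std::ifstream rnd("/dev/urandom", std::ios::binary);
    rnd.read(reinterpret_cast<char*>(&m), sizeof(m));  // dev/random seed material
    Mat m2 = square(m);                                // the M-squared step
    std::string bytes(reinterpret_cast<char*>(&m2), sizeof(m2));
    return std::hash<std::string>{}(bytes);            // placeholder digest
}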

RS

https://is.gd/ECH_TLS

*

RSA 2048 + ECC Chaining, I would like to be clear: RSA 2048 is 4x the certificate size that ECC 384 Certs are, with ECC included in RSA Protocols,


While it is easy to inside crack an RSA on a 300 point Quantum computer worth an estimated 2 Billion $,

It is not that easy for the gamer or crack-ware

DT 'All-serious gamer', Rupert "The-All-Effort"

*

The first effort: RS

(Client or Server) : Compression

Speed of course! & Bandwidth...

Common use of compression speeds up the internet, The list is (with dictionaries) : LSTD, Brotli-G, GZip, Deflate

The first principle to bear in mind for certificates is that most code will not repeat very often..

However ECC is a curve & if you know your own? You can compress it!

Bear in mind that prefetching a curve tells others, You may have it (client or server)

A common principle of the data hoarder like a certificate server is space! Space costs money! & Time..

Common things to compress? Almost everything!

Key Points:

Compression Techniques:

LSTD, Brotli-G, GZip, Deflate:
These are common compression algorithms used to reduce file size and improve transmission speed.

Certificate Compression:

ECC Curve Compression:
By knowing the specific curve used; Compression can be applied to reduce storage and transmission overhead.

Prefetching Considerations:
Prefetching a curve can signal its availability to others; Which can have security implications.

Space Optimization:
Compressing certificates and other data can reduce storage requirements.

Time Efficiency:
Compression can speed up data transfer and processing.

Complexity of Certificate Compression:
Implementing certificate compression can be complex and requires careful consideration of cryptographic algorithms and security protocols.

While compression improves efficiency, it potentially creates risk:
Compression can make data more susceptible to certain attacks (CRIME/BREACH-style side channels against compressed secrets, for example).
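A small sketch of both savings, assuming Python with the pyca/cryptography package and a P-384 key; RFC 8879 defines the TLS-level certificate compression this points toward, and the zlib pass below is only a stand-in for the GZip/Deflate family:

Python

import zlib
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

key = ec.generate_private_key(ec.SECP384R1())
pub = key.public_key()

# 1) ECC point compression: keep x plus one parity byte instead of (x, y)
uncompressed = pub.public_bytes(Encoding.X962, PublicFormat.UncompressedPoint)
compressed = pub.public_bytes(Encoding.X962, PublicFormat.CompressedPoint)
print(len(uncompressed), len(compressed))  # 97 vs 49 bytes for P-384

# 2) General-purpose compression (Deflate family) on PEM/DER material
pem = pub.public_bytes(Encoding.PEM, PublicFormat.SubjectPublicKeyInfo)
print(len(pem), len(zlib.compress(pem, 9)))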

Rupert S

*

PSK & Fast ECC Encryption : Encoded DNS & LSTD Adoption through compressible strings:


Firstly Secure Encrypted DNS exists, Secondly Cloud DNS Exists..

So location is not ID! or IP..

As stated in this document, PSK early-secret extraction is less of a problem for the following reasons:

Strings of similar length, as pointed out by the NIST recommended passwords?

Memory, but also compression!

Complexity is an obstacle.. Hard to compress, Hard to remember & recall! But not impossible...

But later, yes? When we know more about what we want..

Compressed secrets are low latency quick sends!

You have to bear in mind that PSK slope or PSK Escalation? Yes that is where you move onto more complex strings!

Bear in mind that early adoption of a pool of Random strings.. Takes space in a DNS or server Cloud Host archive!

Quick string PSK is a highly compressible and undeniably hackable version..

However our aim is the following:

UDP is pseudo-random
TCP is logical

Under these conditions & in a tunnel; PSK Compression on first ETA.. Is a clear clean 0 to 60 (in car terms),

Fast & Furious is our motto!
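As a toy illustration in Python of why a quick, simple PSK is a highly compressible low-latency send while a complex string resists compression; the strings are of course placeholders:

Python

import os
import zlib

simple = b"passwordpasswordpasswordpassword"   # repetitive 32-byte PSK
complex_ = os.urandom(32)                      # random 32-byte PSK

print(len(zlib.compress(simple, 9)))    # well under the 32-byte input
print(len(zlib.compress(complex_, 9)))  # at or above 32 bytes: header + incompressible data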

RS

*

PSK : Limited Exposure


Exposing a 64Bit, 80Bit, 128Bit key to the wind? Special requirements

ASCON versions have appeared to support PQC Light, So you know there is potential!

Military Air & Navy recommend 128 Bit PSK; really, some craft only have computers big enough for 64 Bit,

64Bit is not ideal; But in the limited exposure field of Landing; Docking & Traveling over 4KM²; 64Bit still holds ground!

With special encryption: ECC & DES3/4/5 Mode : AES, ASCON, ...

The relevance of specialist encryption techniques, Described by the Light Encryption category :

https://csrc.nist.gov/Projects/lightweight-cryptography/finalists

Light Cryptography specialised as : ECC Mode { Insert mode here } : { Bit Depth }

We have potential!
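A minimal sketch of the 128-bit limited-exposure case, using AES-GCM from pyca/cryptography as the stand-in cipher; ASCON or a DES3/4/5-style mode could fill the same slot on lighter hardware, and the telemetry string and associated data are illustrative:

Python

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

psk = os.urandom(16)     # 128-bit pre-shared key
nonce = os.urandom(12)   # never reuse a nonce under the same key
aead = AESGCM(psk)

telemetry = b"landing vector: 4km grid, pad 7"
sealed = aead.encrypt(nonce, telemetry, b"craft-42")  # b"craft-42" = associated data
assert aead.decrypt(nonce, sealed, b"craft-42") == telemetry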

PSK ECDSA

*

ECDSA,ASCON, AES, ML-KEM, Falcon, Dilithium, :


https://csrc.nist.gov/Projects/lightweight-cryptography/finalists

https://csrc.nist.gov/Projects/post-quantum-cryptography/publications

Option 1:

Delivering a Key Ramp..

Simple 8Bit key with high compression ratio first? Data latency allows an unnoticeable first key with LSTD Compression

8Bit PSK
It should be reasonable to assume that an 8-digit PSK is 8 bits or 16 bits per character with UTF-8,

Next delivery of either a 64Bit, 128Bit PSK.. An exchange of 64Bit PSK from client & 128Bit from server?

Potentially dual encryption..
Low complexity hardware

Both directions Key Encrypted Data.

PSK Pre Share Key (through DNS, Preferable Auto from Registered DNS & Cloud Provider)

PSK Key pool delivers key on first contact to server,

PSK Key length escalation, Thoughts..

4 Key DES is in principle the timed exchange of keys. Now as you know with ECH Encrypted Client Hello (Cloudflare - NIST - W3 Standards - RS),

As you may know an open secret is exchanged first, before a security certificate; The exchange protocol:

Exchange protocol:

Preliminary contact protocol:

Escalating Ramp:

Modes suitable for DNS, 0.8us exposure

8Bit }
16Bit }
32Bit } shared many key

Secondary key generation

64Bit }
128Bit }
256Bit }
512Bit } Multiples for ECC, DES3/4/5 Mode
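A minimal sketch of the escalating ramp in Python, assuming HKDF-SHA256 as the derivation step; the stage sizes follow the lists above, while the info label and per-stage salt handling are illustrative:

Python

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each stage derives a longer key from the previous stage's secret
# plus fresh entropy, walking the 8Bit -> 512Bit ramp.
def ramp(stages_bits=(8, 16, 32, 64, 128, 256, 512)):
    secret = os.urandom(1)        # tiny first-contact seed (8-bit stage)
    for bits in stages_bits:
        salt = os.urandom(16)     # fresh per-stage entropy, e.g. dev/random
        secret = HKDF(algorithm=hashes.SHA256(), length=bits // 8 or 1,
                      salt=salt, info=b"psk-escalation").derive(secret)
        yield bits, secret

for bits, key in ramp():
    print(bits, key.hex())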

Rupert S

It shall be known that with ECC, AES delivers a time-related encoding

Option 1+2: The Key Exchange

Next delivery of either a 64Bit, 128Bit PSK.. An exchange of 64Bit PSK from client & 128Bit from server?

Potentially dual encryption..
Low complexity hardware

On existence of a key

Dilithium, Falcon Key delivery

The client shall receive a key for deliveries to the server, a potent /dev/random key..
The server shall deliver a reception key, verified against the server certificate..

The Client & Server have their own origin certificate..

If without a personal key, the client shall have a cookie key from dev/random key creation or a client pool!

If the client has a personal Cookie Key hash or a Client Key, the server shall be in reception of encrypted data..

Both directions Key Encrypted Data.

Reference: https://is.gd/ECH_TLS

Rupert S

*

DES5, ECC, : ML-KEM, AES


ECC & DES3/5

Insertion of certificate verified key exchange with verified return stub key (verified against contact key)

3 to 5 minute timed; multiple /dev/RND stub key exchanges to change pattern..

Variable 3 Port timed; 1 to 3 ports transmission from source to end point,

To stop port flooding, single arrival port.

Exchanges between server & client to involve multi round pollinated STUB Certificate exchange & use.

ECC & DES3/5

Represents Stub Certificate exchange:

----+++++-----+++++---
-----++---+++---+++---
++++---+++---+++---+++
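A rough sketch of the timed stub rotation in Python; the port numbers and the print stand-in for the actual transmission are placeholders, and the 3-to-5-minute window follows the text above:

Python

import os
import random
import time

SEND_PORTS = (40001, 40002, 40003)   # variable 1-to-3 source ports
ARRIVAL_PORT = 40010                 # single arrival port, against flooding

def rotate_stub_keys():
    while True:
        stub_key = os.urandom(32)            # fresh /dev/RND stub key
        port = random.choice(SEND_PORTS)     # vary the transmission port
        print(f"stub {stub_key.hex()[:16]}.. via {port} -> {ARRIVAL_PORT}")
        time.sleep(random.uniform(180, 300)) # 3 to 5 minute timer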

Rupert S

*

Key Exchange Protocol with ECC, AES


The provided text outlines a proposed key exchange protocol that leverages ECC and AES for enhanced security and flexibility.

Here's a breakdown of the key components:

Preliminary Contact and Key Establishment:

PSK (Pre-Shared Key): A shared secret is established between the client and server using DNS or a cloud provider.

Key Length Escalation: The PSK length can be increased over time to enhance security.

ECC and AES: ECC is used for key exchange, while AES is used for symmetric encryption.

Key Delivery and Encryption:

Option 1: Key Ramp:

A simple 8-bit key with high compression is initially exchanged.

Subsequent exchanges involve larger keys (e.g., 64-bit, 128-bit) to strengthen security.

Dual encryption can be considered for added protection.

Option 2: Dilithium or Falcon:

The client receives a key from /dev/urandom for sending data to the server.

The server delivers a reception key to the client, verified against the server's certificate.

If the client doesn't have a personal key, it uses a cookie key or a client pool key.

Stub Certificate Exchange:

A mechanism is proposed to periodically exchange stub certificates for added security.

This involves multiple /dev/urandom key exchanges and transmission through variable ports to prevent port flooding.

Key Points and Considerations:

The protocol aims to provide a secure and flexible key exchange solution.

It incorporates ECC for key exchange and AES for encryption, offering a strong combination.

The option to use Dilithium or Falcon for key delivery provides additional flexibility.

The stub certificate exchange mechanism adds a layer of security by periodically changing the keys.

Potential Improvements:

Additional Security Measures: Perfect forward secrecy (PFS) to protect against compromise of long-term keys (a minimal sketch follows below).

Performance Optimization: Evaluate the performance impact of the proposed protocol, especially in terms of latency and computational overhead.

Compatibility: Ensure compatibility with existing standards and protocols to facilitate widespread adoption.

Overall, the proposed key exchange protocol presents a promising approach that combines ECC, AES, and additional security mechanisms.

By addressing the identified areas for improvement, it can potentially contribute to a more secure and robust communication environment.
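On the PFS suggestion above, a minimal ephemeral-ephemeral sketch in Python; X25519 is one concrete curve choice here, the protocol text itself leaves the curve open:

Python

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Both sides discard these keys after the session, so recorded traffic
# stays safe even if long-term certificate keys later leak.
client_eph = X25519PrivateKey.generate()
server_eph = X25519PrivateKey.generate()

shared_c = client_eph.exchange(server_eph.public_key())
shared_s = server_eph.exchange(client_eph.public_key())
assert shared_c == shared_s

session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"pfs-session").derive(shared_c)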

RS

******** Reference Material :>


Session ECC/RSA/AES/Encryption Key Connection Protector - Certificate (c)RS + Reward welcome

The 1024/2048/4096 cert spawns the ECC cert pair as elliptic curves based on the primary...

The curve cert responds through TLS and QUIC to the ECC key,

Formed temporarily from the local public key & or user certificate!

The computation of verification comes from the ability of the connection,

To provide several versions of the certificate's temporary ECC cert (lasting one hour, for example)

Multiple ECC cert variants all come from a common root cert,

Therefore the server and user can talk, enciphering both ways in a complex manner,

That is complex to spy upon.

The same methodology produces verifiable source certificates of sizes 512 to 8192 (for example)

That can then do RSA and AES and other cyphers from larger base certificates,

Also same-size hashed & cyphered cryptographic pairs.

Hence the use of a hidden session cookie :

(AES:RSA encrypted and temporarily, anonymously IP-locked - refreshed on IP change (for ISP changes to IP))

This is very important, also user anonymous certificates! This equates to a temporary,

Subcert & session ECC Elliptic Curve

Such is the way that a local P11 connection can make a local temp session ECC Elliptic RSA AES
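A compact sketch of that spawning step in Python with pyca/cryptography, assuming an RSA root signing a one-hour ECC session certificate; the common names and key sizes are illustrative:

Python

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa
from cryptography.x509.oid import NameOID

root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = ec.generate_private_key(ec.SECP384R1())

def name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

now = datetime.datetime.now(datetime.timezone.utc)
session_cert = (
    x509.CertificateBuilder()
    .subject_name(name("session-ecc"))
    .issuer_name(name("root-rsa"))
    .public_key(session_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(hours=1))  # hourly session cert
    .sign(root_key, hashes.SHA256())                     # chained to the root
)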

(Copyright) Rupert S

https://science.n-helix.com/

I suggest the cloud UID for verification HMAC, or a constant sent to the user per day/session..

Frankly, if the AES code we use is in plain script, people could spy it..

I think spies do spy on cookies & they do steal logins this way!

HMAC the AES of the UID code, or send an AES/HMAC code inside a personal JS,

That echoes the cloud key for decryption; A Worker..

The communication with the server JS Security Encipher would most certainly..

Make hacking the Security ECC Server Certificate communications very hard to accomplish.

Cloud edge JS encodes to a local worker & from the local worker to edge & server.

The process is called Dual Edge Encrypt Factor : DE²F

Interesting code for security https://developers.cloudflare.com/workers/examples/signing-requests
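In the same spirit as that signing-requests example, a minimal Python sketch of HMAC-signing the session UID/cookie; the daily secret and the token layout are illustrative placeholders:

Python

import hashlib
import hmac
import time

daily_secret = b"rotated-by-the-cloud-each-day"  # stands in for the per-day cloud key

def sign_uid(uid: str) -> str:
    mac = hmac.new(daily_secret, uid.encode(), hashlib.sha256).hexdigest()
    return f"{uid}.{mac}"

def verify_uid(token: str) -> bool:
    uid, _, mac = token.rpartition(".")
    expected = hmac.new(daily_secret, uid.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)    # constant-time check

token = sign_uid(f"user-77:{int(time.time())}")
assert verify_uid(token)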

Reference: https://drive.google.com/file/d/1WmhMcCZZjDI4pKnQsccvaf4RdquhPPs8/ https://is.gd/ECH_TLS

https://is.gd/DictionarySortJS

https://is.gd/UpscaleWinDL

https://is.gd/HPC_HIP_CUDA

https://is.gd/UpscalerUSB_ROM

https://is.gd/OpenStreamingCodecs

********* Really 2018, But really DES3 1980's************


'virtio-crypto: implement RSA algorithm'

Hardware Drive & System RAM 'DES 4 Key 64Bit & 128Bit AES & PolyChaCha & the Chinese one'

For protocols a very good idea & not CPU intensive:

Is 64Bit AES Even supported in crypto hardware : https://lkml.org/lkml/2022/3/1/1428

64Bit 4 Key is a potential with DES & may well work far faster than 128Bit (64 Bit processors)

In the case of HDD Drives & VM Drives it may be transparent.. and offers security:

1 key per drive layer : 4 Platters = 4 Keys

16 Platters = 8 Keys or 4 Keys

(c)RS 2022

https://bit.ly/VESA_BT

*******

Support rsa & pkcs1pad(rsa,sha1) with priority 150.

Test with the QEMU built-in backend; it works fine:

1. The self-test framework of the crypto layer works fine in the guest kernel.

2. Test with a Linux guest (with asym support); the following script tests (note that pkey_XXX is supported only in a newer version of keyutils):

- both public key & private key
- create/close session
- encrypt/decrypt/sign/verify basic driver operation
- also test with the kernel crypto layer (pkey add/query)

All the cases work fine.

rm -rf *.der *.pem *.pfx
modprobe pkcs8_key_parser # if CONFIG_PKCS8_PRIVATE_KEY_PARSER=m

# Prepare 226 bytes of random test data
rm -rf /tmp/data
dd if=/dev/random of=/tmp/data count=1 bs=226

# Generate an RSA 2048 key pair and DER-encode both halves
openssl req -nodes -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -subj "/C=CN/ST=BJ/L=HD/O=qemu/OU=dev/CN=qemu/emailAddress=qemu@qemu.org"
openssl pkcs8 -in key.pem -topk8 -nocrypt -outform DER -out key.der
openssl x509 -in cert.pem -inform PEM -outform DER -out cert.der

# Load both keys into the session keyring
PRIV_KEY_ID=`cat key.der | keyctl padd asymmetric test_priv_key @s`
echo "priv key id = "$PRIV_KEY_ID
PUB_KEY_ID=`cat cert.der | keyctl padd asymmetric test_pub_key @s`
echo "pub key id = "$PUB_KEY_ID

keyctl pkey_query $PRIV_KEY_ID 0
keyctl pkey_query $PUB_KEY_ID 0

echo "Enc with priv key..."
keyctl pkey_encrypt $PRIV_KEY_ID 0 /tmp/data enc=pkcs1 >/tmp/enc.priv
echo "Dec with pub key..."
keyctl pkey_decrypt $PRIV_KEY_ID 0 /tmp/enc.priv enc=pkcs1 >/tmp/dec
cmp /tmp/data /tmp/dec

echo "Sign with priv key..."
keyctl pkey_sign $PRIV_KEY_ID 0 /tmp/data enc=pkcs1 hash=sha1 > /tmp/sig
echo "Verify with pub key..."
keyctl pkey_verify $PRIV_KEY_ID 0 /tmp/data /tmp/sig enc=pkcs1 hash=sha1

echo "Enc with pub key..."
keyctl pkey_encrypt $PUB_KEY_ID 0 /tmp/data enc=pkcs1 >/tmp/enc.pub
echo "Dec with priv key..."
keyctl pkey_decrypt $PRIV_KEY_ID 0 /tmp/enc.pub enc=pkcs1 >/tmp/dec
cmp /tmp/data /tmp/dec

echo "Verify with pub key..."
keyctl pkey_verify $PUB_KEY_ID 0 /tmp/data /tmp/sig enc=pkcs1 hash=sha1

*****

Ascon, Story, (only something the military would appreciate), DT


Now you may feel this is a bunch of talawaki! Well fine! Walla Walla :p

Now you know the birdman(& women) story; Now to refine a point about ASCON & how good it is?

When I was convincing the officers I was talking to Birdmen...

I had my reasons, The improvement of the electron microscope; The antigravity; The analysis...

Yar Yar, But hay? you know something? ASCON is great!

So they gave me permission to carry the formula of ASCON to the birdmen with some conditional requirements,

Desires for technology...

So as I stood with the science officer I said; So the base officers have something to share...

Oh you know man may not be a super being; but he can be underrated!

So I unfolded a piece of paper with a maths formula and some; you know 'Demands' as the French say Desires!

So the Birdman scientist looked at it for a second and .... looked at it...

What is this nonsense....

I DON'T KNOW.... I thought you WERE... Clever :P & I winked!

He looked some more! EUREKA, Not so fast....

Can you do better?

This is good yes, Astounded but oh my god! They shared that with us!

Yes they did and if you can come up with something new.... To add to it...

& Some other things; You & I & some Muscle Bigos can visit the base...

Would you like that? Arrangements were made...

Something Found!

Nothing is known of ASCON's more advanced models & most probably... it is unlikely they ever will be.

All you need to know is...

ASCON IS GREAT!

Duke Thrust

*****

Skipjack, DES3, GCM, A story for gamers about the Logitech G Series gamer mouse! If Aliens are not enough, Try gamers & cheaters


Once upon a time there was a contest in Asia...

Yes I know, astounding! :L Well anyway the contest was on Euro-Gamer live! So you know how long-winded the interviews are before the contest?

The interview was 1.3 hours & the guys had the gaming rigs setup...

The guy had his mouse 'Plugged in' To his Plug/Adapter 'Radio init'

In the audience were a group of malcontents...

Malcontents with hacking radio adapters!

They hacked his Radio over 1 hour of interviews...

But something gave them away..

Network traffic; The sniggering...

The shuffle of feet & conversation...

You know detective work! & you Do Know that they have detectors for this kind of harassment? Right, you know they do!

Radio jamming, Scamming, hacking, falsification.... Theft & robbery!

They got one of them; Don't matter... We got the code!

He turned off & on his gear... his mouse, his headset...

You know what? THE CODE CHANGES!

Hail Logitech G, Hail you the gamer!

Duke Thrust

ECC - Elliptic Matrix - Lattice Maths - RS

Elliptic Matrix - Lattice Maths


Lattice Square cohesive, Time Stamp Elliptoid 

(c)Rupert S

Elliptic in out

*

Matrix Formula M.A.P & AVX Computed parallel instruction

We can either repeat loop solves : (cos(b), sin(b)) * a + mean,
Or we can form a table matrix

(cos(b), sin(b)) = x , * a + mean = y

       1     2     3     4
a    x*y,  x*y,  x*y,  x*y
b    x*y,  x*y,  x*y,  x*y
c    x*y,  x*y,  x*y,  x*y
d    x*y,  x*y,  x*y,  x*y
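A vectorised reading of that table in Python/NumPy, treating x as the combined cos/sin term scaled by a plus the mean and y as the per-column values; the sample angles are arbitrary:

Python

import numpy as np

a, mean = 2.0, 0.5
b = np.array([0.1, 0.2, 0.3, 0.4])           # one angle per row a..d
x = (np.cos(b) + np.sin(b)) * a + mean       # x = (cos(b), sin(b)) * a + mean
y = np.tan(np.array([0.5, 1.0, 1.5, 2.0]))   # column values 1..4

table = np.outer(x, y)   # table[i, j] = x[i] * y[j], one SIMD-friendly call
print(table)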

*

High Precision Maths Solve : { 16Bit, 32Bit, 64Bit & so forth } :

Create table ARC, SIN,TAN, Size = Multiples of 4 or rather 2x2, Or 8 or 4x4

Values (cos(b), sin(b)) = x

tan(T) = y

Example:

Values (cos(b), sin(b)) = x * y = tan(T)


       1     2     3     4
a    x*y,  x*y,  x*y,  x*y
b    x*y,  x*y,  x*y,  x*y
c    x*y,  x*y,  x*y,  x*y
d    x*y,  x*y,  x*y,  x*y

Parallel rows shall be sorted (SiMD)

Values of {A,B,C,D}:1, {A,B,C,D}:2, {A,B,C,D}:3, {A,B,C,D}:4,

Sort by atomic High Accuracy RTC (timer) ECC

The table shall be sorted by a given gradient, Ellipse,

The rules shall be:

Cache the ellipses,

Form the ellipses into an elliptic curve,

Reduce the curve to a set of maths formula,

Map the curves for dimensions over time,

Curve definition precision steps :

Reduce the curve to a higher state logical maximum cap : { 16Bit, 32Bit, 64Bit & so forth } per tick / Second

Specify a bit depth for the expansion of the curve : { 16Bit, 32Bit, 64Bit & so forth } per tick / Second

Send a reciprocal curve per..: second, Per negotiated time period, Per group

*****

New table #Formulae 08:51 29/10/2024


arc sin tan table, useful for clocks! Well anyway, Maths

Python

import numpy as np

# Create angles from 0 to 90 degrees in steps of 10 degrees
angles = np.arange(0, 91, 10)

# Calculate sine and tangent of each angle
sine_values = np.sin(np.radians(angles))
tan_values = np.tan(np.radians(angles))

# Create the table header
table_header = "{:10s} {:10s} {:10s}".format("Angle", "Sin", "Tan")

# Create the table rows using string formatting
table_rows = []
for angle, sine, tangent in zip(angles, sine_values, tan_values):
    # indent the loop body; note tan(90°) overflows to ~1.6e16 in floating point
    table_rows.append("{:10d} {:10.4f} {:10.4f}".format(angle, sine, tangent))

# Combine the header and rows into a table string
table_string = "\n".join([table_header] + table_rows)

# Print the arc sin tan table
print(table_string)

// (c)Rupert S

https://is.gd/ECH_TLS

*****


Machine Learning


https://science.n-helix.com/2022/10/ml.html

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html

Accelerated Python: NPU, TPU, SiMD

https://is.gd/CoralAI
https://is.gd/TPU_Inference
https://is.gd/TPU_Inference2

https://is.gd/DictionarySortJS
https://is.gd/UpscaleWinDL
https://is.gd/TFLiteDev
https://is.gd/TFLiteDevP2

https://is.gd/HPC_HIP_CUDA
https://is.gd/SPIRV_HIPcuda

https://is.gd/UpscalerUSB_ROM

https://is.gd/OpenStreamingCodecs

https://is.gd/AMDPro2024PolarisCombined

The perfect Proposal RS

Friday, September 27, 2024

XRay Scan

XRay Scan, A Space bird related story, With a scientific point 05:41 27/09/2024


When I initially was in the tube & through to the lasers examining the earth, mud & plant samples...
Lasers here are directed at samples floating in air! Now as I said this involves..

Anti Gravity &
Directed Red lasers ( presumably unknown forms of scanning )
Possibly heat dissipated matter; AKA the gas examination we use today with a mass spectrometer!

Now I have frequented hospitals before & do occasionally visit a kid with leukaemia & other illnesses...

In actual fact I can perform several miracles ...

The miracle of desperate thought & observation
The miracle of caring
The miracle of positive thinking; Staff need hope & energy
The miracle of group thinking!

& Finally all the prayer worship & magical gift stuff that HP, God, Odin Father, Loki, Thor & Christ & myself are proud of!

But practicality! Right Because I was not born a faith healer, I was born a cynical scientist student...

However God saw to me being an avid believer; Saving me from drowning under 3m of water when I couldn't swim & nobody was there to save me but him.

But you know from the heart that you may have the same cynicism as an atheist & know that hurts because Gods don't help a selfish C****

Directed radiation: XRays & radio require aiming specifically directed, intense radiation..

Directly at large clusters of CELLS (Cancers)

Direct radio does a lot of damage!

However Cancer has one big weakness & that is: radiation generally does a lot more damage to cancer cells than to normal cells!

Reasons include:

Water content

Constant cellular division

Less firm cell walls

The lack of genetic correction

Error multiplication

Energy demand, AKA cancer cells are energy greedy!

Nutrition demands (cell replication, resource burning, Lack of vein presence in dense clusters, Salt & iodine)

Directed pulse XRay & Radio & even needle injected alpha beta decay...

Directed amplification Laser Array

Damage is our hope!

Rupert S

https://is.gd/DictionarySortJS

https://kindpassion.n-helix.com/2024/09/bird.html

Monday, September 2, 2024

Laser TV

Laser TV 13:13 02/09/2024 (c) Rupert S

Laser TV : Now to put a very important point, if the firmware stalls the laser points directly at the viewer at full power!

Even a 1 Watt laser can potentially blind someone.

Refraction is our friend; no one is looking directly at a laser when it goes static because of chip or firmware failure or assassination.

Several means of protecting the client exist

The basic formula is to use microchip lasers & LED that are specifically made to point,
Basically LED that point are not specifically infinite! But we certainly don't care; Do we haha

Basic diode laser with a vibrating glass lens inside a solenoid.

The laser or LED is formulated to point in directions & move so that it forms shapes on a surface or piece of opaque material.

The opaque material & materials like steam, Can be the focus of local re-emits & non-direct reflection & We can use a material that holds & re-emits light,

Example materials include glow in the dark paint (light reception material)

Lasers can also point at radiant opaque but reflective surfaces..

Surface paints that blur on refraction, so that re-emits spread in a curve, provide more viewer angles,

Lensed surfaces provide good refraction potential, With glossy white & mirror inside them & sub lens Black LED such as found in calculators for over 20 years.

Representative of optic lenses with reflective material backing:

[ ][ ][ ][ ][ ]
[ ][ ][ ][ ][ ]
[ ][ ][ ][ ][ ]
[ ][ ][ ][ ][ ]
[ ][ ][ ][ ][ ]

The principle works for either forward firing reflector plates,
Layers on the wall on a mat reflective,
Refractive lensing you fire through.

Angle calculations make an intentional decision to be Square; So distortions are minimal,

Curved lenses give circular motion the advantage, But distortion has to be calculated well..
The ( ) & the [ ] bracket lens both work; Overall, Square lenses are cheaper to produce consistently.
 
(c)RS

Genuinely good JS + Python & configuration work, Windows, Linux, ARM

ML tensor + ONNX Learner libraries & files
Model examples in models folder

https://is.gd/DictionarySortJS
https://is.gd/UpscaleWinDL
https://is.gd/HPC_HIP_CUDA

https://is.gd/UpscalerUSB_ROM

https://is.gd/OpenStreamingCodecs

https://is.gd/AMDPro2024PolarisCombined

The perfect Proposal RS


Tuesday, March 26, 2024

GoFetch Security Exploit - Repair Security Fix (c)RS

GoFetch memory dependent prefetch exploits 01:15 26/03/2024 (c)RS

GoFetch Vulnerability:

Exploits DMPs present in certain processors (e.g., Apple Silicon, newer Intel) to leak sensitive information.

DMPs aim to improve performance by prefetching data the processor might need based on past access patterns.

Malicious actors can trick DMPs into prefetching data from memory locations they shouldn't access, revealing sensitive information like cryptographic keys.


*

How the Virus Works :

GoFetch memory-dependent prefetch exploits rely on performance-boosting statistic logs:

The virus works by analysing High Precision Timers & the Runtime Analytics Process. These analytics likely reveal patterns in memory access that the virus exploits to trigger DMP behaviour and leak information.

If those analytics are unavailable, the exploit presumably fails. Praise the quality of the analytics process!

Countermeasures

Restrict access to analytics data: Only certificate certified applications should access the data DMPs rely on.

Permissions: Similar to Android, keep performance data and timers private, requiring explicit permission for access.

Delayed delivery: False or delayed data might not be as effective but could slow down attackers.

Sandboxing: Isolate untrusted applications in a virtual machine (VM) to limit their ability to exploit the system & performance metrics & statistics.
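A tiny Python sketch of the delayed/coarsened-delivery idea above: untrusted code only ever sees a quantised clock, starving the exploit of the high-precision timing patterns it needs; the 100-microsecond resolution is an arbitrary illustration:

Python

import time

RESOLUTION = 100e-6  # 100 microseconds, far coarser than perf_counter

def coarse_perf_counter() -> float:
    t = time.perf_counter()
    return t - (t % RESOLUTION)  # round down to the permitted resolution

print(coarse_perf_counter())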


Rupert S

*

The thoughts to process:

One or Two Facts,

Facts worth noting about the statistics required to exploit the CPU internals:

One

Keep the statistics away from the non-certified virus..
Keep them Admin-only..

Two

Unshared performance statistics & timers don't get processed!
Keep the properties behind personal permissions, like Android.

Three

Lies about statistics are not allowed...
However delayed delivery affects little but a code developer...

Four

Applications have to have been trusted to gain statistics

You can contain the bug with analytic observation of the data query, and if no permission is granted...

Boot them to a VM virtual "reality", aka delayed data and a fabrication of certainty.

GOD Loves you...
Jahova

RS
