Tuesday, November 30, 2021

MultiBit Serial & Parallel execution conversion inline of N*Bit -+

Multi Bit load operations for bitmap,Texture & Other tasks +ON+HighLowOP (c)RS

May take higher or lower bit depth & precisions: Rupert S 2021

2x 16Bit loads make 32Bit but take 2 cycles...

16 Bit loads with 32 Bit Stores & Math unit:

Operation 1

16Bit , 16Bit , 16Bit , 16Bit Operation
   \  /            \  /

Inline Store

32Bit Store      32Bit Store
         \        /
        64Bit Store

32Bit ADD/DIV x 2 or 64Bit ADD/DIV x 1

Operation 2

32Bit ADD/DIV x 2 or 64Bit ADD/DIV x 1
          /      \

4 x 16Bit Store

4 x 16Bit Operation

MultiBit Serial & Parallel execution conversion inline of N*Bit -+

In the case of ADD -+ Signed for example:(c)RS
Plus & - Lines ADD or Subtract (Signed, Bit Depth Irrelevant)

Multiples of 16Bit works in place of 32Bit or 64Bit

V1: 16Bit Values composing a total 128Bit number
V2: 16Bit Values composing a total 128Bit number - (Value less than V1)
V3: Result

NBit: Bit Depth

4x16Bit operations in the same cycle >

If Value = 16Bit = Store
If Value = V3=Bit = Store * NBit

Stored 128Bit RAM or if remainder = less > 4x16Bit -1-1-1 ; 16Bit Value Store
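The packing scheme above can be sketched in software (SWAR style): four 16-bit lanes live in one 64-bit store, and a masked add keeps carries from crossing lane boundaries. A minimal Python model, not hardware code; the lane layout and wrap-on-overflow behaviour are assumptions:

```python
MASK16 = 0xFFFF

def pack4x16(values):
    """Pack four 16-bit lanes into one 64-bit word (lane 0 = lowest bits)."""
    word = 0
    for i, v in enumerate(values):
        word |= (v & MASK16) << (16 * i)
    return word

def unpack4x16(word):
    """Split a 64-bit word back into its four 16-bit lanes."""
    return [(word >> (16 * i)) & MASK16 for i in range(4)]

def add4x16(a, b):
    """Four independent 16-bit adds in one pass (SWAR): odd and even
    lanes are added separately so a carry can never cross into the
    neighbouring lane, then the two halves are recombined."""
    even = 0x0000FFFF0000FFFF
    odd = 0xFFFF0000FFFF0000
    return (((a & even) + (b & even)) & even) | (((a & odd) + (b & odd)) & odd)
```

On real hardware the equivalent is a packed-SIMD add (e.g. one 64-bit register treated as 4x16-bit); the model shows why 4x 16Bit operations can complete in the cycle budget of one wide ADD.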

RS https://bit.ly/DJ_EQ


*RAND OP Ubuntu


(Rn1 *<>/ Rn2 *<>/ Rn3)

VAR(+-) Var = Rn1 +- Rn8

(Rn5 *<>/ Rn6 *<>/ Rn7)

4 Samples over N * Sample 1 to 4

Input into pool 1 Low half -+
Input into pool 1 High half -+

*RAND OP Recycle It
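A minimal sketch of the pool idea above, assuming the Rn samples are small integers folded alternately into the low and high halves of a pool and then whitened with a hash; the alternation rule and the SHA-256 whitener are my assumptions, not part of the original notes:

```python
import hashlib

def mix_into_pool(pool, samples):
    """Fold register-style samples into the low (+) and high (-) halves
    of an entropy pool, then whiten the result with a hash (recycle it)."""
    half = len(pool) // 2
    for i, s in enumerate(samples):
        # even samples -> pool 1 low half, odd samples -> pool 1 high half
        idx = (i % half) if i % 2 == 0 else half + (i % half)
        pool[idx] ^= s & 0xFF
    digest = hashlib.sha256(bytes(pool)).digest()
    return bytearray(digest[:len(pool)])
```

The output can be fed back in as the next pool, which is one reading of "*RAND OP Recycle It".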




On the subject of how deep a personality of 4Bit, 8Bit, 16Bit is reference:

Sunday, November 21, 2021

MontiCarlo Workload Selector

Cash_Bo_Montin Selector (c)Rupert S for Cache & System Operations Optimisation & Compute

CBoMontin Processor Scheduler - Good for consoles & RT Kernels (For HTTP+JS HyperThreading)


Cache Loaded Runtime : CLR

OpenCL JIT Compiler inclusion as main loadable object compiler,

Mainly because the CBoMontin Processor Scheduler is intended to run in cache; We need to optimise the scheduler for each Processor Cache size & depth,

Ordering instructions from inside the Processor Cache requires optimised code; We create our task list interfaces (UDP & TCP Port approximates) inside the cache..

We prefetch our workloads from kernel space & user space & order them into our processor workflows,

The main process polls priority & nice values for each task & can select the processing order..

We would be prioritising the tasks onto the same processor as the parent task if those tasks are in the same application..

For that we would have to know if the task requires out of order execution or in order; Tasks such as video rendering can afford to have Audio & Video on two threads; However time stamps will be required to be precise!

The actual Selector is compiled optimally based on:

Processor Cache size

Instruction cache size
Data cache size
Processor Thread count

Available task queues
Optimal Queue Size
Optimal Task size

Priority sort based on applied function groups combined with optimised processor selection,
Processor function optimisations
Processor Features list & preference sorting optimisation

Preferred thread & processor for sustained & fast function & reduced processor to processor transfers..

From that we compile our Cache Loaded Runtime & optimise our Processor, Process & priority.
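The polling of priority & nice values and the parent-processor preference could be sketched like this (the Task fields and the tie-break order are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    app: str                         # owning application
    priority: int                    # higher = more urgent
    nice: int                        # lower = more favoured
    parent_cpu: Optional[int] = None # processor the parent task runs on

def order_tasks(tasks):
    """Poll priority & nice and sort: highest priority first,
    lowest nice value first on a tie."""
    return sorted(tasks, key=lambda t: (-t.priority, t.nice))

def pick_cpu(task, parents_by_app, default_cpu):
    """Prefer the parent task's processor when both tasks belong to the
    same application, keeping cache-hot data local."""
    parent = parents_by_app.get(task.app)
    if parent is not None and parent.parent_cpu is not None:
        return parent.parent_cpu
    return default_cpu
```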


QoS To Optimise the routing: Task Management To optimise the process

Transparent Task Sharing Protocols


Monticarlo Workload Selector

CPU, GPU, APU, SPU, ROM, Kernel & Operating system :

CPU/GPU/Chip/Kernel Cache & Thread Work Operations management

In/out Memory operations & CU feature selection are ordered into groups based on:

CU Selection is preferred by Chip features used by code & Cache in-lining in the same group.

Global Use (In application or common DLL) Group Core CU
Localised Thread group, Sub prioritised to Sub CU in location of work use
Prioritised to local CU with Chip feature available & with lower utilisation (lowers latency)

{ Monticarlos In/Out }
System input load Predictable Statistic analysis }
Monticarlo Assumed averages per task }
System: IO, IRQ, DMA, Data Motion }

{ Process by Advantage }
{ Process By Task FeatureSet }
{ Process by time & Tick & Clock Cycle: Estimates }
{ Monticarlos Out/In }

Random task & workload optimiser ,
Task & Workload Assignment Requestor,
Pointer Allocator,
Cache RAM Allocation System.

Multithreaded pointer Cache Object tasks & management.
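A rough model of the selector loop described above: sample a task-cost model for the "Monticarlo Assumed averages per task", then prefer the CU that has the needed chip features and the lowest utilisation. All names and thresholds here are illustrative:

```python
import random

def monte_carlo_avg(cost_fn, trials=200, seed=42):
    """'Monticarlo Assumed averages per task': sample a task-cost model
    repeatedly and take the mean as the planning estimate."""
    rng = random.Random(seed)
    return sum(cost_fn(rng) for _ in range(trials)) / trials

def assign(required_features, cus):
    """CU Selection: prefer a local CU that has every required chip
    feature & the lowest utilisation (lower utilisation lowers latency)."""
    eligible = [cu for cu in cus if required_features <= cu["features"]]
    return min(eligible, key=lambda cu: cu["load"]) if eligible else None
```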

{SEV_TDL_TDX Kernel Interaction mount point: Input & Output by SSL Code Class}:
{Code Runtime Classification & Arch:Feature & Location Store: Kernel System Interaction Cache Flow Buffer}

Based upon the fact that you can input Monti Carlos Semi Random Ordered work loads into the core process:

*Core Process Instruction*

CPU, Cache, Light memory load job selector
Resident in Cache L3 for 256KB+- Cache list + Code 4Kb L2 with list access to L3

L2:L3 <> L1 Data + Instruction


(c)RS 12:00 to 14:00 Haptic & 3D Audio : Group Cluster Thread SPU:GPU CU

Merge = "GPU+CPU SiMD" 3D Wave (Audio 93% * Haptic 7%)

Grouping selector
3D Wave selector

Group Property value A = Audio S=Sound G=Geometry V=Video H=Haptic B=Both BH=BothHaptic

CPU Int : ID+ (group of)"ASGVH"

Float ops FPU Light localised positioning 8 thread

Shader ID + Group 16 Blocks
SiMD/AVX Big Group 2 Cycle
GPU CU / Audio CU (Localised grouping MultiThreads)
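The 93% / 7% merge could be sketched as a weighted sum of the two sample streams; the exact mixing law of the hardware block is not specified above, so a linear blend is assumed:

```python
def merge_wave(audio, haptic, w_audio=0.93, w_haptic=0.07):
    """Merge an audio wave & a haptic wave into one '3D Wave' buffer
    using the Audio 93% * Haptic 7% weighting suggested above."""
    assert len(audio) == len(haptic)
    return [w_audio * a + w_haptic * h for a, h in zip(audio, haptic)]
```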



Task & Workload Assignment Requestor : Memory & Power

We have to bear in mind power requirements & task persistence in the Task & Workload Assignment Requestor:

Knowledge of the operating system's requirements:
Latency list in groups { high processor load requirements > Low processor load requirements } : { latency Estimates }
Ram load , Store & clear {high burst : 2ns < 15ns } GB/s Ordered
Ram load , Store & clear {high burst : 5ns < 20ns } MB/s Disordered

GPU Ram load , Store & clear {high burst : 2ns < 15ns } GB/s Ordered
AUDIO Ram load , Store & clear {high burst : 1ns < 15ns } MB/s Disordered

AUDIO Ram load , Store & clear {high burst : 1ns < 15ns } MB/s Ordered
AUDIO Ram load , Store & clear {high burst : 1ns < 15ns } KB/s Disordered

Network load , Send & Receive {Medium burst : 2ns < 15ns } GB/s Ordered
Network load , Send & Receive {high burst : 1ns < 20ns } MB/s Disordered
Hard drive management & storage {medium : 15ns SSD < 40ns HDD}
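The latency groups above can be held as a small table and serviced tightest worst-case window first, ordered streams before disordered on a tie; the tuple layout and sort key are assumptions:

```python
# Each queue: (name, (best_ns, worst_ns) burst window, throughput class, ordered?)
QUEUES = [
    ("RAM ordered",        (2, 15),  "GB/s", True),
    ("RAM disordered",     (5, 20),  "MB/s", False),
    ("GPU RAM ordered",    (2, 15),  "GB/s", True),
    ("Audio ordered",      (1, 15),  "MB/s", True),
    ("Audio disordered",   (1, 15),  "KB/s", False),
    ("Network ordered",    (2, 15),  "GB/s", True),
    ("Network disordered", (1, 20),  "MB/s", False),
    ("Storage",            (15, 40), "MB/s", True),
]

def latency_order(queues):
    """Service order: tightest worst-case latency first, then tightest
    best-case, with ordered streams ahead of disordered on a tie."""
    return sorted(queues, key=lambda q: (q[1][1], q[1][0], not q[3]))
```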


Also Good for disassociated Asymmetric cores; Since these pose a significant challenge to most software,
However categorising by Processor function yields remarkable classification abilities:

Processor Advanced Instruction set
Core speed

Location in association with a group of baton passing & interthread messaging & cache,
Symmetry classed processes & threads.


Bo-Montin Workload Compute :&: Hardware Accelerated Audio : 3D Audio Dolby NR & DTS

Hardware Accelerated Audio : 3D Audio Dolby NR & DTS : Project Acoustics : Strangely enough ....
Be more positive about Audio Block : Dolby & DTS will use it & thereby in games!

Workload Compute : Where you optimise workload lists through SiMD Maths to HASH subtasks into new GPU workloads,

Simply utilize Direct ML to anticipate future motion vectors (As with video)

OpenCL & Direct Compute : Lists & Compute RAM Loads and Shaders to load...

DMA & Reversed DMA (From GPU to & from RAM)
ReBAR to vector compressed textures without intervention of one processor or another...

Compression Block :
KRAKEN & BC Compression & Decompression
SiMD Direct Compressed Load using the Cache Block per SiMD Work Group.

Shaders Optimised & compiled in FPU & SiMD Code form for GPU: Compiling Methods:

In advance load & compile : BRT : Before RunTime : task load optimised & ordered Task Executor : Bo-Montin Scheduler

GPU SiMD & FPU (micro 128KB Block encoder : decoder : compiler)
CPU SiMD & FPU (micro 128KB Block encoder : decoder : compiler)

JIT : Just in Time task load optimised & ordered Task Executor : Bo-Montin Scheduler

load & compile :

GPU SiMD & FPU (micro 128KB Block encoder : decoder : compiler)
CPU SiMD & FPU (micro 128KB Block encoder : decoder : compiler)
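A toy decision rule for choosing between the two compile paths above: compile before runtime (BRT) when the shader set is known in advance and the system is already loaded, JIT otherwise. The 0.75 threshold is an assumed tuning knob, not a measured value:

```python
def choose_compile_mode(known_ahead, load_factor, threshold=0.75):
    """Pick BRT (before-runtime) or JIT compilation for a shader batch.

    known_ahead: shader set is available before the task runs
    load_factor: current system load, 0.0 .. 1.0"""
    if known_ahead and load_factor > threshold:
        return "BRT"   # compile in advance; the busy system can't spare JIT time
    return "JIT"       # compile on demand
```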


Task manager opportunistically &or Systematic Resource Allocation (c)RS

We also need a direct transport tunnel for data between GPU of different types,

Firstly my experience is as follows:

I have a RX280x & RX560 & Intel® Movidius™ Neural Compute SDK Python API v2 & both do Python work! When I have this configuration the RX280x is barely used unless clearly utilized independently!

The Task manager & Python need to directly transfer workloads & processor tasks between each system processor,

Not limited to the primary Processor (4Ghz FX8320E) & the AVX supporting Movidius & to & from the RX280 & RX560; Both however support direct Video rendering & Encoding through DX12,

However the RX6500 does not directly support the AMD Hardware Encode under DX12.1 (New Version 2022-04-21)

& That RX560 comes in handy! if the Video rendering work is directly transferred to RX560 or RX280x & Encoded there!

Therefore I clearly see 2 examples.. & there are more!

Clearly Movidius is advantaged for scaler work on behalf of the Python process & in addition the Upscaling RSR & Dynamic Resolution; We do however need directly to have the Task manager opportunistically or systematically plan the use of resources & Even the processor could offload AVX Work.

No-one has this planned & We DO.


PM-QoS - Processor Model QoS Tree for TCP, UDP & QUIC

The Method of PM-QoS Roleplayed in a way that Firmware & CPU Prefetch ML Coders can understand.


Multiple Busses &or Processor Features in an Open Compute environment with competitive task scheduling

[Task Scheduler] Monticarlo-Workload-Selector

We prioritise data traffic by importance & Need to ensure that all CPU Functions are used...

In the case of a Chiplet GPU We need to assign function groups to CU & QoS is used to assess available Multiple BUSS Capacities over competing merits,
[Merits : Buss Data Capacity, Buss Cycles, Available Features, Function Endpoint]

PM-QoS is a way of Prioritising Buss traffic to processor functions & RAM & Storage Busses that:

States a data array such as:

Buss Width

Divisibility (Example: where you transform a 128Bit buss into 32Bit x 4 Data motions and synchronize the transfers)

Data Transfer Cycles Available

Used Data Rate / Total Data Throughput Rate = N
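The PM-QoS quantities above reduce to a few small functions: N = used data rate / total throughput rate, a divisibility helper (128Bit buss into 4 x 32Bit motions), and a preference for the buss with the lowest N. Function and field names here are illustrative:

```python
def bus_utilisation(used_rate, total_rate):
    """N = Used Data Rate / Total Data Throughput Rate."""
    return used_rate / total_rate

def split_bus(width_bits, lane_bits):
    """Divide a wide buss into synchronised narrower transfers,
    e.g. a 128Bit buss into 4 x 32Bit data motions."""
    assert width_bits % lane_bits == 0
    return width_bits // lane_bits

def prefer_bus(buses):
    """QoS preference: the buss with the most spare capacity (lowest N)."""
    return min(buses, key=lambda b: bus_utilisation(b["used"], b["total"]))
```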

(c)Rupert S https://science.n-helix.com

Kernel Computation Resources Management :

OpenCL, Direct Compute, Compute Shaders & MipMaps :

Optimisation of all system resource use & management 2022 HPC RS

On the matter of Asymmetric GPU / CPU configuration, As in when 2 GPU are not of the same Class or from different providers,

Such a situation is when the motherboard is NVidia & the GPU is AMD for example.

We need both to work, So how?

Firstly the kind of work matters: Operating System Managed Workload Scheduler : Open CL & Direct X as examples:

Firstly PCI 1+ has DMA Transfers of over 500MB/s so data transfer is not a problem,
Secondly DMA is card based; So a shader can transfer work.
Third, the memory transfer can be compressed; it does not need to transition mainly through the CPU..
No Cache Issue; Same for Audio Bus

MipMapping is an example with a low PCI to PCI DMA Transfer cost,
But Shaders & OpenCL or Direct Compute are primary examples,
(Direct Compute & OpenCL workloads are cross compatible & convertible)

Exposing a system's potential does require that a DX11 card be utilized for MipMaps or Texture Storage & operations; Within the capacities of DirectX 11, 12, 12.1 as and when compatible..

Optimisation of all system resource use & management 2022 HPC

Rupert S


Innate Smart Access (c)RS

The Smart-access features require 3 things:
[Innate Compression, Decompression, QoS To Optimise the routing, Task Management To optimise the process] : Task Managed Transfer : DMA:PIO : Transparent Task Sharing Protocols

The following is the initiation of the Smart-access Age


QoS To Optimise the routing:Task Management To optimise the process

Transparent Task Sharing Protocols

Innate Compression, Decompression


EMS Leaf Allocations & Why we find them useful: (c)RS https://science.n-helix.com

Memory clear through page voltage removal..

Systematic Cache randomisation flipping (On RAM Cache Directs syncobable (RAND Static, Lower quality RAND)) (Why not DEV Write 8 x 16KB (Aligned Streams (2x), L2 CACHE Reasons))

Anyway in order to do this we Allocate Leaf Pages or Large Pages...
De Allocation invokes scrubbing or VOID Call in the case of a VM.

So in our case VT86 Instructions are quite useful in a Hypervisor;
&So Hypervisor from kernel = WIN!

(c)Rupert S Reference T Clear


Atomic: Add custom atomic.h implementation

Now we can use Statistic variance Atomic Counters inside loops with SipHash 32Bit value hashes to add variance to dev/random & quite significantly increase motion in the pool,

But use Main thread interactions with average micro loops to reduce the overall HASH turnover rate..

Modification of the additional kind ADD's to the pre published value & additionally passes CPU Activity count numbers to the statistic pool; In the same loop main thread.
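A user-space analogue of the counter idea above. Python's hashlib has no SipHash, so BLAKE2s with a 4-byte digest stands in for the 32-bit hash; the counter relies on itertools.count, which is safe under the GIL. A sketch, not the kernel implementation:

```python
import hashlib
import itertools

class VarianceCounter:
    """Atomic-style loop counter whose ticks, blended with a CPU
    activity figure, are hashed into a small 32-bit variance pool."""
    def __init__(self):
        self._count = itertools.count()  # atomic enough under the GIL
        self.pool = 0                    # 32-bit variance pool

    def tick(self, activity):
        """Count one loop pass; fold (count, activity) into the pool."""
        n = next(self._count)
        h = hashlib.blake2s(f"{n}:{activity}".encode(), digest_size=4)
        self.pool ^= int.from_bytes(h.digest(), "little")
        return n
```

The main thread would periodically drain `pool` into the system entropy pool rather than hashing on every iteration, which matches the "reduce the overall HASH turnover rate" note.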

Rupert S

Atomics & Reference PID/TSC/LeafBlend

Atomics https://lkml.org/lkml/2022/4/12/84
RDPID https://lkml.org/lkml/2022/4/12/143
Opening Time Security Layering Reference PID with RDPID LeafHASH


If you could "Decode" Win DLL & particularly the Compiler code, plug
in! you could use these on console :



High performance firmware:


More on HRTF 3D Audio

TERMINATOR Interview #Feeling https://www.youtube.com/watch?v=srksXVEkfAs & Yes you want that Conan to sound right in 3D HTRF

Cyberpunk 2077 HDR : THX, DTS, Dolby : Haptic response so clear you can feel the 3D SOUND



If we had a front door & a back door & we said "That door is only available exclusively to us", Someone would still want to use our code!
AES is good for one thing! Stopping Cyber Crime!
God save us from total anarchistic cynicism

Rupert S

  * This function will use the architecture-specific hardware random
- * number generator if it is available.  The arch-specific hw RNG will
- * almost certainly be faster than what we can do in software, but it
- * is impossible to verify that it is implemented securely (as
- * opposed, to, say, the AES encryption of a sequence number using a
- * key known by the NSA).  So it's useful if we need the speed, but
- * only if we're willing to trust the hardware manufacturer not to
- * have put in a back door.
- *
- * Return number of bytes filled in.
+ * number generator if it is available. It is not recommended for
+ * use. Use get_random_bytes() instead. It returns the number of
+ * bytes filled in.


RAND : Callback & spinlock

Callback & spinlock are not just linux : Best we hash &or Encrypt several sources (if we have them)
If we have a pure source of Random.. we like the purity! but 90% of the time we like to hash them all together & keep the quality & source integrally variable to improve complexity.
Rupert S

'function gets random data from the best available source. The current code has a sequence in several places that calls one or more of arch_get_random_long() or related functions, checks the return value(s) and on failure falls back to random_get_entropy(). get_source_long() is intended to replace all such sequences. This is better in several ways. In the fallback case it gives much more random output than random_get_entropy(). It never wastes effort by calling arch_get_random_long() et al. when the relevant config variables are not set. When it does use arch_get_random_long(), it does not deliver raw output from that function but masks it by mixing with stored random data.'
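The quoted design can be mirrored in a short sketch: try the arch RNG, fall back to a timing source, and never emit raw hardware output — mask it by hashing with stored random data. Here os.urandom and perf_counter_ns are stand-ins for the kernel primitives, and the function names only echo the quote:

```python
import hashlib
import os
import time

def arch_get_random_long():
    """Stand-in for the hardware RNG; may fail (return None)."""
    try:
        return int.from_bytes(os.urandom(8), "little")
    except NotImplementedError:
        return None

def random_get_entropy():
    """Fallback timing-based entropy, like the kernel's cycle counter."""
    return time.perf_counter_ns()

def get_source_long(stored_pool=b"stored-random-data"):
    """Best-available-source read: arch RNG first, timer fallback,
    and the raw value is always masked by mixing with stored data."""
    raw = arch_get_random_long()
    if raw is None:
        raw = random_get_entropy()
    digest = hashlib.sha256(stored_pool + raw.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:8], "little")
```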

RAND : Callback & spinlock : Code Method

Spinlock IRQ Interrupted upon RAND Pool Transfer > Why not Use DMA Transfer & Memory Buffer Merge with SiMD : AVX Byte Swapping & Merge into present RAM Buffer or Future location with Memory location Fast Table.

Part of Bo-Montin Selector Code:

(CPU & Thread Synced & on same CPU)

(Thread 1 : cpu:1:2:3:4)
(Buffer 1) > SiMD cache & Function :

(Thread 2 : cpu:1:2:3:4)
(Memory Location Table : EMS:XMS:32Bit:64Bit)
(Selection Buffer & Transfer)

(Buffer 1) (Buffer 2) (Buffer 3)
(Entropy Sample : DieHARD : Small)

Rupert S


Random Initiator : Linus' 50ee7529ec45

Since Linus' 50ee7529ec45 ("random: try to actively add entropy
rather than passively wait for it"), the RNG does a haveged-style jitter
dance around the scheduler, in order to produce entropy

The key is to initialize with a SEED key; To avoid the seed needing to be replaced too often we Encipher it in a set order with an additive key..

to create the perfect circumstances we utilize 2 seeds:

Initiator math key CH1:8Bit to 32Bit High quality HASH Cryptic
& Key 2 CrH

8Bit to 256Bit : Stored HASH Cryptic

We operate maths on the differential and Crypto the HASH :
CrH 'Math' CH1(1,2,3>)

AES/SHA2/PolyCHA > Save to /dev/random & use

We may also use the code directly to do unique HASH RAND & therefore keep crucial details personal or per application & MultiThreads &or CPU & GPU & Task.
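One way to read the two-seed construction above, with SHA-256 standing in for "AES/SHA2/PolyCHA" and multiplication standing in for the unspecified 'Math' operator — both of these are assumptions:

```python
import hashlib

def initiator_stream(ch1_seed, crh_seed, blocks=4):
    """Two-seed initiator: the small 'math key' CH1 is stepped
    (1, 2, 3, ...) against the stored key CrH — CrH 'Math' CH1(1,2,3>)
    — and each result is hashed before use, so neither raw seed ever
    leaves the generator."""
    out = []
    for i in range(1, blocks + 1):
        mixed = (ch1_seed * i) ^ crh_seed
        out.append(hashlib.sha256(mixed.to_bytes(32, "little")).digest())
    return b"".join(out)
```

The output could be saved to /dev/random or kept per application / per thread, as the note suggests.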

Rupert S

(Spectre & Retpoline Ablation) PreFETCH Statistical Load Adaptive CPU Optimising Task Manager ML (c)RS 2022

Come to think of it, Light encryption 'In State' may be possible in the Cache L3 (the main problem with retpoline) & L2 (secondary) : How?

PFIO_Pol & GPIO Combined with PSLAC TaskManager (CBo_Montin) Processor, Kernel, UserSpace.
Byte Swapping for example or 16b instruction, If a lightly used instruction is used
(one that is under utilized)
Other XOR SiMD instructions can potentially be used to pre load L2 & L1 Instruction & Data.

Spectre & Retpoline: 1% CPU Hit : 75% improved Security : ALL CPU & GPU Processor Types Compatible.

In Terms of passwords & SSL Certificate loads only, The Coding would take 20Minutes & consume only 0.1% of total CPU Time.

Also Good for disassociated Asymmetric cores; Since these pose a significant challenge to most software,
However categorising by Processor function yields remarkable classification abilities:

Processor Advanced Instruction set
Core speed

Location in association with a group of baton passing & interthread messaging & cache,
Symmetry classed processes & threads.

HASH Example


In reference to : https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

CPU Statistical load debug 128 Thread :

PFIO_Pol Generic Processor Function IO & Feature Statistics polling + CPUFunctionClass.h + VCache Memory Table Secure HASH


GPIO: Simple logic analyzer using polling : Prefer = Precise Core VClock + GPIO + Processor Function IO & Feature Statistics polling


Wednesday, November 17, 2021

iHM_TES - Interpretive Haptic Motion Time expression Sense-8é: iHM_TES: (c)RS

Interpretive Haptic Motion Time expression Sense-8é: iHM_TES: (c)RS

1 Introduce 3D Audio containerised packet for haptic,
2 Simplification of technique to allow WebAPI,
3 Meta Data for interaction use (Adaptation of geometry, Sound & feedback loop)
4 Backported API : Interaction is a packet; Not a form of MP3 or AAC or H264, H265, VP9, VVC
5 Interpreted loosely (Common goal, Many themes).
6 Smell, Taste, Sound, Feel, Interaction, Choice : 5 Senses? Why not "Sense"ation 8
7 You can feel it, Taste it & Know what it thinks, How it's heart pulses.. Sense' At (E)ions
8 Properties in the bitstream notify Audio & Video & Expressions of Sense to the meaning to be transferred & meant. the Sense-ATE Property Packet is flexible & multiple endpoint.
9 Transference one expression of experience into another, Convoluted networks transfer one sense into another.
10 Meshes Sense(tm) Combined low latency packets merge sense expression into one cohesive low latency experience by notifying your BT, HDMI, Audio, AMP & TV of the TIME & Sync of each play or motion or move.

(Haptic Is a 3D Sound Waveform of 3D Geometry) ,
Can be visual but not guaranteed to need that complication So:

SBC, AAC, AptX prove virtually indistinct from Visual waveform geometry Profiled haptic.

Both methods work with localised packet container format..

Game Database loaded waveforms.

Game geometry in the form of waves:


Rupert Summerskill 2021






MPEG Standardisation of haptic feedback: 2 missions: SDK + Client Build + Size & Latency. (c)RS


Saturday, November 13, 2021


Sound-focusing & Wave-Focus-ANC & WF_AnANC (c)RS

Sound Violation & Noise + Digital + Electronic noise reduction in harmonic failure.

Applicable to HDMI, VESA, Bluetooth, Radio, DAB Radio & TV, WIFI & all energy technology through licenced technology (c)RS

By applying wave sampling to waveforms & compression waveforms (Wavelets) we can either
Subtract or add to the wave, By applying Noise suppression or noise shaping or noise boosting..

To the electronic, Light or energy or Data, Image or audio we can shape that wave so that the value displayed or utilised is:


Dr ANC Table: Applies to:

File compression
File Accuracy
Noise levels
Power & amplification

Sensors &+ Noise
Sharpening & Enhancing
Processing, Isolating or Extrapolating Data
Video process
Audio Process
Data Process


More or less

Uniform or ordered
Cleaner or Original
Unique or the Same as the Master


Anti +- Wave-Focus-ANC : ANC Applied to invert frequencies in:RS

NE Noise Enhancement }for a purpose
NR Noise Reduction }
Shaping & Direction }
Sharpening & Enhancing }
Isolating or extrapolating Data }
Resultant Manipulation }
Resultant Clarification or Simplification }

Speakers & Display Systems : TV, Monitor, VR, Motion sensation & Haptic Feedback
Sensors & Camera or Video & motion etcetera
Signal &+- noise data with statistical & or dynamic data
Mechanical motion enhancement
Mechanical vibration
Electrical noise & Static
Cars & Aeroplanes & space ships
Fan blades

Application of a static vibrator (Physical, Electrical, Energy & force)
For common noise reduction or enhancement or filter..
Beside the application through automatic reduction such as:

Static foam
Metal & polymer & Resin

Component for common vibration of a statistically normalised level & Dynamic NR + Dynamic NE

To direct sound through computational variance of sound wave profile so that it varies or vibrates the cone in different ways to reflect:

A 3 Dimensional shape over the cone that will reproduce a sound varied over a 3D space such as an eardrum or ear tunnel or a room..

Or otherwise shape sound through ANC Noise Cancelling calculation: Sin, Cos & Tan Waves varied over time to modulate audio or filter Audio

To Shape audio and enhance it through Inverted ANC & thus subtly or greatly boost & direct audio in subtle ways that reflect across surfaces & angles ...
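The inversion at the heart of ANC, plus a gain-envelope version of the "Sin, Cos & Tan Waves varied over time" idea, can be sketched in a few lines (the envelope function is a placeholder for whatever time-varying modulation is wanted):

```python
import math

def anc_wave(samples):
    """Classic ANC: the cancelling wave is the input inverted (+ -),
    so signal + anti-signal sums to silence."""
    return [-s for s in samples]

def shaped_wave(freq_hz, rate_hz, n, gain_fn):
    """A sine shaped over time by a per-sample gain envelope —
    gain_fn(t) modulates or filters the audio as time advances."""
    return [gain_fn(i / rate_hz) * math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n)]
```

For Inverted ANC (boosting rather than cancelling), the sign flip would be replaced by a positive, shaped gain.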

Both to boost waves in the Sense of EQ or to enhance or modify measured Fidelity of a speaker or relay:

Examples of inverted &+ ANC:

Electric cables carry noise (Remove it) or use noise to enhance audio boosting.
(principally like jiu Jitsu: To use momentum to advantage)

To shape waves & to make clean & precise, Sharp, Angular or otherwise shape.

In AMP's, Power converters, Cables and other energy systems such as:
Cameras, Lenses, Lasers, Emitters & receivers.

Image systems, Sensors & File save formats & HDD, SSD..
Application in principle enhances or destroys or shapes noise..
As we know Noise shaping also involves wavelets:

Both applicable second layer modifiers +-
& Wave co-modifiers.

(JPG & ALAC, AAC & SBC + Other file compression systems)

Enhancement, Sharpening & improvements..
Quality, Colour, Sound, Energy, Waveforms.

(c)Rupert S

Combined with:

Thursday, November 4, 2021

*Expand Formula* SonaRuS : Form & Shape - Codec Wavelet Complimentary cross conversion (c)RS 2021

Form & Shape - Codec Wavelet Complimentary cross conversion (c)RS 2021

Full support on all Hardware architectures & platforms + CPU & GPU.
Full support on all Bluetooth Devices, HDMI Devices, S/PDIF & TOSLink Devices.

Through Hardware Accelerated Conversion & Enhancement or otherwise optimisation for Data Bandwidth & Quality of content; QoS

More like most GPU in the NVidia & AMD (& Qualcomm & ARM) lineup, I really need both of you to support : SBC, AAC, LC3 & AptX as potential HDMI connection options.

You see as you know, largely upscaled MP3 & MP4 Content barely benefits;
From Conversion to a final PCM, Maybe LPCM?

But benefits massively from cross conversion into an upscaled form of the same codec type!

They also benefit from quick low latency conversion with the same WAVE Shapes (Wavelets)..
Scaled to higher precision.

principally in audio analogue from digital convergence; higher precision output from compressed waves command the following:

Audio compression & expansion formula :

*Expand Formula* SonaRuS

D = Distance
T = Time period

X = (Angle X Over D) / T
Y = (Angle X Over D) / T²

Expand = (D/T) * (D/T²)


(CoSin X) = (CoSin Y) * Expand | Replace


(CoSin Y) * Expand = (CoSin X) | Replace
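The Expand formula evaluates directly, using the symbols as defined above (D = Distance, T = Time period); this is a literal transcription, with no claim about the physical units intended:

```python
def expand(d, t):
    """Expand = (D/T) * (D/T^2), per the *Expand Formula* above."""
    return (d / t) * (d / t ** 2)

def upscale_cosin(cosin_y, d, t):
    """(CoSin X) = (CoSin Y) * Expand — scale a compressed wave
    sample back up to higher precision."""
    return cosin_y * expand(d, t)
```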

(c)Rupert S