Sunday, January 2, 2022

SBTM: Source Based Tone mapping : (c)RS

I feel source-based tone mapping may be good if one knows the ICC Profiles of the display

Definite reason : Desktop windows with multiple ICC Colour palettes & Dynamic Colour Ranges.

Example: We take the desktop of a Mac, PC or Phone & define a Master ICC Profile for the frame buffer; We take small windows & define a colour gamut, WCG & Dynamic range for each..

Within a 48Bit frame buffer, by blitting + DMA from a Sub Cache, multiple windows with multiple memory frames are metadata enabled, have a small memory footprint & are Multiprocessing & Threading enabled.

We can therefore, with VRR, load a single part of the frame; On an 8-Core CPU with AVX & SiMD this is quite relevant!

We can tone-map a single frame from source & from TV to make the most of the total transmittable data (Only 48GB/s!)
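
A minimal C sketch of the idea, assuming a 16Bit-per-channel RGB master frame buffer & simple per-window metadata: each window is tone-mapped from its own source range to the display range & blitted into place, so a VRR partial refresh only needs to carry that rectangle. The struct fields & the linear gain are illustrative stand-ins for a real ICC-profile-driven curve.

/* Sketch: per-window source-based tone mapping into a 48Bit
   (16Bit per channel) master frame buffer. Window metadata fields
   and the simple gain curve are assumptions, not a defined SBTM API. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    size_t x, y, w, h;     /* placement of the window in the master frame  */
    float  source_nits;    /* source peak luminance (from window metadata) */
    float  display_nits;   /* display peak luminance (from ICC/EDID)       */
} WindowMeta;

/* Tone-map one window and blit it into the master buffer; only this
   rectangle changes, so only this region needs transmitting under VRR. */
static void blit_tonemapped(uint16_t *master, size_t master_w,
                            const uint16_t *src, const WindowMeta *m)
{
    float gain = m->display_nits / m->source_nits;  /* stand-in for a real curve */
    for (size_t row = 0; row < m->h; ++row)
        for (size_t col = 0; col < m->w; ++col)
            for (int c = 0; c < 3; ++c) {
                size_t si = (row * m->w + col) * 3 + c;
                size_t di = ((m->y + row) * master_w + (m->x + col)) * 3 + c;
                float v = src[si] * gain;
                master[di] = (uint16_t)(v > 65535.0f ? 65535.0f : v);
            }
}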

Screen composure: Multi-Window HDR and Variable-palette screen composure for monitors,
Computers & TVs & the GPU DAC


2 primary methods:

Full Light spectrum Composure frame with Paletted Virtual boxed renders
(can be 8Bit, 15Bit, 16Bit, 24Bit, 32Bit, 40Bit, 48Bit)
With HSL Light to dark (Composed on Dark, Alpha)
with Light spectrum in 8Bit, 10Bit, 12Bit, 14Bit, 16Bit per channel.

Frame buffer source can carry Meta Data by spectrum & Light range :
HSL L<>D 8Bit <> 16Bit (Alpha Channel) + R,G,B 8Bit <> 16Bit per channel
+ Frame rendered in colour space..

& Dither down on DAC

Or render all frames at the same Bit Depth, compose in a 48Bit linear colour space & Dither down on the DAC

Composure with smaller palettes saves RAM but still allows the final composure layer to dither up into a higher colour range: 48Bit, for example, with 16Bit colour palettes in the frame buffer.
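
A hedged C sketch of the "dither down on DAC" step, assuming a 16Bit-per-channel composed value and a small ordered-dither matrix; the 2x2 Bayer pattern & the target depth are illustrative assumptions.

/* Sketch: reduce a 16Bit channel value to a lower DAC output depth
   with a 2x2 ordered dither. out_bits = 10 would give a 10Bit code. */
#include <stdint.h>
#include <stddef.h>

static const uint16_t bayer2x2[2][2] = { {0, 2}, {3, 1} };   /* thresholds 0..3 */

static uint16_t dither_down(uint16_t v16, int out_bits, size_t x, size_t y)
{
    int      shift = 16 - out_bits;                  /* bits being discarded    */
    uint32_t step  = 1u << shift;                    /* size of one output step */
    uint32_t thr   = (bayer2x2[y & 1][x & 1] * step) / 4;  /* pixel-position offset */
    uint32_t v     = (uint32_t)v16 + thr;
    if (v > 0xFFFF) v = 0xFFFF;                      /* clamp before truncation */
    return (uint16_t)(v >> shift);                   /* quantised code for the DAC */
}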

Rupert S https://science.n-helix.com

https://bit.ly/VESA_BT

Colour Range Example: Final Fantasy XVI HDR

https://www.youtube.com/watch?v=vFtqbjf1jjI

Cut Negativity but answers are to be found!

Monday, December 27, 2021

3D Audio Plugin : Console, PC, Mac, IOS, Android, Linux

Any Console Game Buffered VR Surround & 3D Audio Game development 2021 (c)RS

Remember, all the game needs is a virtual Output buffer before sending to the console audio output

The 7.1.2 output emulation layer simply writes a 7.1.2 Audio buffer before output & virtualises it to any output buffer mode the console, Astro's or Creative Labs USB or the TV has, such as E-AC3 Dolby Atmos

Yes, a 7.1.2 channel Audio Cache in the Game SDK & Firmware is entirely possible,

Output Processing is unique to the Game cartridge & the E-AC3 & E-AC4 Processing Plugin Codec

To clarify : If the game buffers a 7.1.2 channel profile, any output Audio Firmware is compatible with 3D sound, even Stereo E-AC3 Dolby

Any Firmware, BIOS or GPU can accomplish this with willpower,

To any 3D Format:

Creative 3D EAX
Dolby
DTS
THX

Buffer & Plugin SDK Codec

Any game can process 7.1.2 audio into stereo with a Virtual Dolby plugin in the console's source code
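
A minimal C sketch of that virtualisation step, folding a 7.1.2 buffer down to stereo. The channel order & mix gains are illustrative assumptions; a licensed Dolby, DTS or THX plugin would apply its own HRTF-based coefficients.

/* Sketch: downmix an interleaved 7.1.2 buffer (10 channels) to stereo. */
#include <stddef.h>

enum { FL, FR, FC, LFE, SL, SR, BL, BR, TL, TR, CH_712 };   /* assumed order */

static void downmix_712_to_stereo(const float *in, float *out, size_t frames)
{
    const float side = 0.7071f, back = 0.5f, top = 0.5f, lfe = 0.3162f;
    for (size_t i = 0; i < frames; ++i) {
        const float *s = in + i * CH_712;
        float centre = 0.7071f * s[FC] + lfe * s[LFE];      /* shared by L & R */
        out[i * 2 + 0] = s[FL] + centre + side * s[SL] + back * s[BL] + top * s[TL];
        out[i * 2 + 1] = s[FR] + centre + side * s[SR] + back * s[BR] + top * s[TR];
    }
}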

If a company:

SoundBlaster : Creative Labs
Dolby Atmos
DTS
THX

Create plugins for your GAME SDK

E-AC3 & E-AC4 & DTS

*

My version of AmbiSonics 3D Audio &+ Virtual Surround embedded into the channels if required (Not always) (c)RS

Basically you can stream to the server in 5.1 HQ & convert into Ambisonics 2.1
(Joint stereo with 3-way conversion),
Which essentially means up to a 7.3.3 Channel arrangement,

You can use more than 5 channels & subchannels,
But planning for Bluetooth means 2 joint Stereo channels per earbud,
Stereo headphones & Bluetooth headphones commonly utilise a Single Channel stream..
Joint 2.1 : 2 Channel & 1 Joined optimised for fidelity & speaker arrangement.

Theoretically, if you use 2 joint stereo channels,
With the centric joint channel being : High, Low, Centre &or+ BASS

E-AC3 & E-AC4 & DTS & AAC & OPUS are most likely to work here
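
A small C sketch of the Joint 2.1 fold described above, assuming a standard 5.1 input: left & right keep their surrounds & the joined third channel carries centre + bass, optimised for a Bluetooth earbud pair. The gains & channel order are illustrative assumptions.

/* Sketch: fold interleaved 5.1 into "Joint 2.1" - two channels plus one
   joined centre/bass channel. */
#include <stddef.h>

enum { L51_FL, L51_FR, L51_FC, L51_LFE, L51_RL, L51_RR, CH_51 };

static void fold_51_to_joint21(const float *in, float *left, float *right,
                               float *joined, size_t frames)
{
    for (size_t i = 0; i < frames; ++i) {
        const float *s = in + i * CH_51;
        left[i]   = s[L51_FL] + 0.7071f * s[L51_RL];
        right[i]  = s[L51_FR] + 0.7071f * s[L51_RR];
        joined[i] = 0.7071f * s[L51_FC] + s[L51_LFE];   /* High/Low/Centric + BASS */
    }
}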

https://science.n-helix.com/2021/10/eccd-vr-3datmos-enhanced-codec.html

*

AmbiSonics (3D Spatial Averaging network Depth N to N8) & Spatial Audio : Unreal Engine Demonstration : THX, DTS, Dolby : Personalised HRTF Ear Profile ML Servers

AmbiSonic 360.. Float 32-bit Opus Codec
Progress in AmbiSonics 3D Audio Surround : New : AmbiSonic 360.. These guys are playing for your Samurai game : 360 TokniOKI Blade!

Traditional Asiatic music (Calypso style), at a very minimum in very high quality Stereo >
Channel layout: Opus 32Bit float QUAD with a beautiful high quality sound; Video is basic..
Suitable for assessment & enjoyment.


Cyberpunk 2077 HDR : THX, DTS, Dolby : Haptic response so clear you can feel the 3D SOUND

Apex Legends : THX, DTS, Dolby

TERMINATOR Interview #Feeling https://www.youtube.com/watch?v=srksXVEkfAs & Yes you want that Conan to sound right in 3D HRTF

https://www.youtube.com/watch?v=d1OBJP7VcJs

Best Game Graphics of 2021 - PC, Xbox, PlayStation

Tuesday, November 30, 2021

MultiBit Serial & Parallel execution conversion inline of N*Bit -+

Multi Bit load operations for bitmap, Texture & Other tasks +ON+HighLowOP (c)RS

May take higher or lower bit depth & precisions: Rupert S 2021

Two 16Bit loads make 32Bit but take 2 cycles...

16 Bit loads with 32 Bit Stores & Math unit:

Operation 1

16Bit , 16Bit , 16Bit , 16Bit Operation
   \      /          \      /
        Inline Store
 32Bit Store        32Bit Store
        \              /
          64Bit Store

32Bit ADD/DIV x 2 or 64Bit ADD/DIV x 1

Operation 2

32Bit ADD/DIV x 2 or 64Bit ADD/DIV x 1
        \              /
        4x 16Bit Store

        4x 16Bit Operation
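
A minimal C sketch of the Operation 1 idea on a plain 32-bit ALU (rather than a named AVX/SiMD instruction, which is an assumption here): two 16Bit lanes are packed into one 32Bit word & added in a single operation, with a mask stopping the low lane's carry from spilling into the high lane.

/* Sketch: SWAR-style add of two 16Bit lanes inside one 32Bit ADD. */
#include <stdint.h>

static uint32_t add_2x16(uint32_t a, uint32_t b)
{
    uint32_t sum = (a & 0x7FFF7FFFu) + (b & 0x7FFF7FFFu);  /* per-lane add, no cross-lane carry */
    uint32_t msb = (a ^ b) & 0x80008000u;                  /* recover each lane's top bit       */
    return sum ^ msb;                                      /* two wrapped 16Bit results         */
}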

MultiBit Serial & Parallel execution conversion inline of N*Bit -+

In the case of signed ADD -+ for example: (c)RS
The + & - lines ADD or Subtract (Signed, Bit Depth Irrelevant)

Multiples of 16Bit work in place of 32Bit or 64Bit

V1: 16Bit Values composing a total 128Bit number
V2: 16Bit Values composing a total 128Bit number - (Value less than V1)
V3: Result

NBit: Bit Depth

4x16Bit operations in the same cycle >

If the value fits in 16Bit > store a single 16Bit value
If the value is wider (V3 = NBit) > store NBit/16 x 16Bit values

Store the 128Bit result in RAM, or if the remainder is smaller, drop 16Bit stores one at a time (4x16Bit -1-1-1) down to a single 16Bit value store
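
A hedged C sketch of the V1 + V2 = V3 case: each 128Bit number is held as eight 16Bit limbs & added limb by limb with carry. The limb count is a parameter, so the same loop covers any NBit multiple of 16; names & layout are illustrative assumptions.

/* Sketch: add two NBit numbers stored as 16Bit limbs, least significant first. */
#include <stdint.h>
#include <stddef.h>

static void add_nx16(const uint16_t *v1, const uint16_t *v2,
                     uint16_t *v3, size_t limbs)            /* limbs = NBit / 16 */
{
    uint32_t carry = 0;
    for (size_t i = 0; i < limbs; ++i) {
        uint32_t s = (uint32_t)v1[i] + v2[i] + carry;
        v3[i] = (uint16_t)s;                                /* 16Bit value store  */
        carry = s >> 16;                                    /* carry to next limb */
    }
}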

RS https://bit.ly/DJ_EQ

*

*RAND OP Ubuntu

https://pollinate.n-helix.com/

(Rn1 *<>/ Rn2 *<>/ Rn3)

-+
VAR(+-) Var = Rn1 +- Rn8

(Rn5 *<>/ Rn6 *<>/ Rn7)

4 Samples over N * Sample 1 to 4

Input into pool 1 Low half -+
Input into pool 1 High half -+

*RAND OP Recycle It

RS
*
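
A small C sketch of the RAND OP recycle above, assuming the Rn1..Rn8 samples arrive as 32-bit words: the two bracketed groups are mixed into the low & high halves of a 64-bit pool word, with the VAR term folded into both. The specific operators are illustrative; the real entropy source would be the pollinate.n-helix.com pool.

/* Sketch: fold eight raw samples into one pool word, low and high halves. */
#include <stdint.h>

static uint64_t rand_op_mix(const uint32_t rn[8])
{
    uint32_t low  = (rn[0] * rn[1]) ^ rn[2];     /* (Rn1 op Rn2 op Rn3) */
    uint32_t high = (rn[4] * rn[5]) ^ rn[6];     /* (Rn5 op Rn6 op Rn7) */
    uint32_t var  = rn[0] + rn[7];               /* VAR = Rn1 +- Rn8    */
    low  ^= var;                                 /* pool 1, low half    */
    high ^= var >> 1;                            /* pool 1, high half   */
    return ((uint64_t)high << 32) | low;
}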

https://science.n-helix.com/2021/11/parallel-execution.html
https://science.n-helix.com/2021/11/monticarlo-workload-selector.html

Sunday, November 21, 2021

MontiCarlo Workload Selector

Cash_Bo_Montin Selector (c)Rupert S for Cache & System Operations Optimisation & Compute

CBoMontin Processor Scheduler - Good for consoles & RT Kernels (For HTTP+JS HyperThreading)

*
Monticarlo Workload Selector

CPU, GPU, APU, SPU, ROM, Kernel & Operating system :

CPU/GPU/Chip/Kernel Cache & Thread Work Operations management

In/out Memory operations & CU feature selection are ordered into groups as follows:

CU Selection prefers the Chip features used by the code & Cache in-lining in the same group.

Global Use (In application or common DLL) Group Core CU
Localised Thread group, Sub prioritised to Sub CU in location of work use
Prioritised to local CU with Chip feature available & with lower utilisation (lowers latency)

{ Monticarlos In/Out }
System input load Predictable Statistic analysis }
Monticarlo Assumed averages per task }
System: IO, IRQ, DMA, Data Motion }

{ Process by Advantage }
{ Process By Task FeatureSet }
{ Process by time & Tick & Clock Cycle: Estimates }
{ Monticarlos Out/In }
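
A hedged C sketch of the selector loop implied above: each task carries a feature-match score & a time/tick/clock estimate, and a Monte Carlo draw picks work in proportion to advantage per estimated cycle. The field names & weighting are illustrative assumptions, not a defined kernel interface.

/* Sketch: weighted random pick of the next workload. */
#include <stdlib.h>
#include <stddef.h>

typedef struct {
    float feature_match;    /* 0..1: how well the CU's features fit this code    */
    float est_cycles;       /* predicted cost: time, tick & clock-cycle estimate */
} Task;

static size_t pick_task(const Task *t, size_t n)    /* n assumed 1..64 */
{
    float weights[64], total = 0.0f;
    for (size_t i = 0; i < n && i < 64; ++i) {
        weights[i] = t[i].feature_match / (t[i].est_cycles + 1.0f);  /* advantage per cycle */
        total += weights[i];
    }
    float r = ((float)rand() / (float)RAND_MAX) * total;    /* the Monte Carlo draw */
    for (size_t i = 0; i < n && i < 64; ++i) {
        r -= weights[i];
        if (r <= 0.0f) return i;
    }
    return n - 1;
}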

Random task & workload optimiser,
Task & Workload Assignment Requestor,
Pointer Allocator,
Cache RAM Allocation System.

Multithreaded pointer Cache Object tasks & management.

{SEV_TDL_TDX Kernel Interaction mount point: Input & Output by SSL Code Class}:
{Code Runtime Classification & Arch:Feature & Location Store: Kernel System Interaction Cache Flow Buffer}
https://is.gd/SEV_SSLSecureCore
https://is.gd/SSL_DRM_CleanKernel
*

Based upon the fact that you can input MontiCarlo semi-random ordered workloads into the core process:

*Core Process Instruction*

CPU, Cache, Light memory load job selector
Resident in L3 Cache as a 256KB+- Cache list + 4KB of Code in L2 with list access to L3

L2:L3 <> L1 Data + Instruction

*formula*


(c)RS 12:00 to 14:00 Haptic & 3D Audio : Group Cluster Thread SPU:GPU CU

Merge = "GPU+CPU SiMD" 3D Wave (Audio 93% * Haptic 7%)

Grouping selector
3D Wave selector

Group Property values: A = Audio, S = Sound, G = Geometry, V = Video, H = Haptic, B = Both, BH = BothHaptic

CPU Int : ID+ (group of)"ASGVH"

Float ops FPU Light localised positioning 8 thread

Shader ID + Group 16 Blocks
SiMD/AVX Big Group 2 Cycle
GPU CU / Audio CU (Localised grouping MultiThreads)
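
A minimal C sketch of the merge formula, reading it as a weighted blend & assuming both buffers have already been tagged with the same 'ASGVH' group property: one loop mixes the audio wave at 93% with the haptic wave at 7%, which maps naturally onto a SiMD/AVX multiply-add.

/* Sketch: blend grouped audio and haptic waves into one 3D wave buffer. */
#include <stddef.h>

typedef struct {
    char         property;   /* 'A','S','G','V','H','B' group tag */
    const float *wave;
    size_t       frames;
} GroupedWave;

static void merge_audio_haptic(const GroupedWave *audio, const GroupedWave *haptic,
                               float *out)
{
    size_t n = audio->frames < haptic->frames ? audio->frames : haptic->frames;
    for (size_t i = 0; i < n; ++i)
        out[i] = 0.93f * audio->wave[i] + 0.07f * haptic->wave[i];  /* Audio 93% : Haptic 7% */
}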

https://www.youtube.com/watch?v=cJkx-OLgLzo

If you could "Decode" the Win DLL & particularly the Compiler code, plug
it in! You could use these on console :

https://bit.ly/DJ_EQ
https://bit.ly/VESA_BT


High performance firmware:



https://is.gd/SEV_SSLSecureCore
https://is.gd/SSL_DRM_CleanKernel



*
More on HRTF 3D Audio

TERMINATOR Interview #Feeling https://www.youtube.com/watch?v=srksXVEkfAs & Yes you want that Conan to sound right in 3D HRTF

Cyberpunk 2077 HDR : THX, DTS, Dolby : Haptic response so clear you can feel the 3D SOUND




*

Wednesday, November 17, 2021

iHM_TES - Interpretive Haptic Motion Time expression Sense-8é: iHM_TES: (c)RS

Interpretive Haptic Motion Time expression Sense-8é: iHM_TES: (c)RS

1 Introduce 3D Audio containerised packet for haptic,
2 Simplification of technique to allow WebAPI,
3 Meta Data for interaction use (Adaptation of geometry, Sound & feedback loop)
4 Backported API : Interaction is a packet; Not a form of MP3 or AAC or H264, H265, VP9, VVC
5 Interpreted loosely (Common goal, Many themes).
6 Smell, Taste, Sound, Feel, Interaction, Choice : 5 Senses? Why not "Sense"ation 8
7 You can feel it, Taste it & Know what it thinks, How it's heart pulses.. Sense' At (E)ions
8 Properties in the bitstream notify Audio & Video & Expressions of Sense of the meaning to be transferred & meant. The Sense-ATE Property Packet is flexible & multi-endpoint.
9 Transference of one expression of experience into another; Convolutional networks transfer one sense into another.
10 Meshes Sense(tm) Combined low latency packets merge sense expression into one cohesive low latency experience by notifying your BT, HDMI, Audio, AMP & TV of the TIME & Sync of each play or motion or move.
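
A hedged C sketch of a Sense-ATE container, assuming a simple binary layout: the packet carries play time for sync (point 10), a sense mask (point 6), a metadata link to geometry (point 3) & the haptic waveform samples themselves (point 1). Field names & sizes are illustrative, not a defined bitstream.

/* Sketch: one containerised haptic/sense packet. */
#include <stdint.h>

typedef struct {
    uint64_t play_time_us;   /* sync point for BT, HDMI, Audio, AMP & TV     */
    uint8_t  sense_mask;     /* one bit per sense carried in this packet     */
    uint8_t  geometry_id;    /* link to game geometry / adaptation metadata  */
    uint16_t sample_count;   /* number of waveform samples that follow       */
    int16_t  samples[];      /* the 3D haptic waveform itself                */
} SensePacket;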


(Haptic is a 3D Sound Waveform of 3D Geometry),
It can be visual but is not guaranteed to need that complication, So:

SBC, AAC & AptX prove virtually indistinct from visual-waveform, geometry-profiled haptic.

Both methods work with localised packet container format..

Game Database loaded waveforms.

Game geometry in the form of waves:

Simple
Colorful
Complex

Rupert Summerskill 2021

https://bit.ly/DJ_EQ

https://science.n-helix.com/2019/06/vulkan-stack.html

https://science.n-helix.com/2017/02/open-gaming.html

https://science.n-helix.com/2016/04/3d-desktop-virtualization.html

https://science.n-helix.com/2020/04/render.html

MPEG Standardisation of haptic feedback: 2 missions: SDK + Client Build + Size & Latency. (c)RS

https://www.marketscreener.com/quote/stock/IMMERSION-CORPORATION-9670/news/Immersion-MPEG-Standardization-is-a-Watershed-Moment-for-Haptics-37048471/

Saturday, November 13, 2021

Wave-Focus-ANC

Sound-focusing & Wave-Focus-ANC & WF_AnANC (c)RS

Sound Violation & Noise + Digital + Electronic noise reduction in harmonic failure.

Applicable to HDMI, VESA, Bluetooth, Radio, DAB Radio & TV, WIFI & all energy technology through licenced technology (c)RS

By applying wave sampling to waveforms & compression waveforms (Wavelets) we can either
subtract from or add to the wave, by applying Noise suppression, noise shaping or noise boosting..

To the electronic, Light, energy, Data, Image or audio signal we can shape that wave so that the value displayed or utilised is:

*

Dr ANC Table: Applies to:


Sound
Electronics
Light
LED
Laser
Processing
File compression
File Accuracy
Noise levels
Power & amplification

Sensors &+ Noise
Sharpening & Enhancing
Processing, Isolating or Extrapolating Data
Video process
Audio Process
Data Process

+

More or less

Accurate
Colourful
Sharper
Distinct
Uniform or ordered
Chaotic
Complex
Simple
Cleaner or Original
Unique or the Same as the Master

*

Anti +- Wave-Focus-ANC : ANC Applied to invert frequencies in:RS

NE Noise Enhancement }for a purpose
NR Noise Reduction }
Shaping & Direction }
Sharpening & Enhancing }
Isolating or extrapolating Data }
Resultant Manipulation }
Resultant Clarification or Simplification }

Speakers & Display Systems : TV, Monitor, VR, Motion sensation & Haptic Feedback
Sensors & Camera or Video & motion etcetera
Signal &+- noise data with statistical & or dynamic data
Motion
Rockets
Mechanical motion enhancement
Mechanical vibration
Electrical noise & Static
Cars & Aeroplanes & space ships
Fan blades
Motors

Application of a static vibrator (Physical, Electrical, Energy & force)
For common noise reduction, enhancement or filtering..
Besides the application, through automatic reduction such as:

Foam
Static foam
Metal & polymer & Resin

Components for common vibration of a statistically normalised level & Dynamic NR + Dynamic NE
*

To direct sound through computational variance of sound wave profile so that it varies or vibrates the cone in different ways to reflect:

A 3 Dimensional shape over the cone that will reproduce a sound varied over a 3D space such as an eardrum or ear tunnel or a room..

Or otherwise shape sound through ANC Noise Cancelling calculation: Sin, Cos & Tan Waves varied over time to modulate or filter Audio

To shape audio and enhance it through Inverted ANC & thus subtly or greatly boost & direct audio in subtle ways that reflect across surfaces & angles ...

Both to boost waves in the Sense of EQ or to enhance or modify measured Fidelity of a speaker or relay:

Examples of inverted &+ ANC:

Electric cables carry noise (Remove it) or use noise to enhance audio boosting.
(principally like jiu Jitsu: To use momentum to advantage)
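
A minimal C sketch of the subtract-or-add principle, assuming a measured noise reference is available: a gain of -1 cancels the noise (classic ANC), while a small positive gain reuses the same momentum to boost, in the jiu-jitsu sense above. The gain values are illustrative assumptions.

/* Sketch: Wave-Focus-ANC as one loop; the sign of the gain decides
   noise reduction (NR) or noise enhancement (NE). */
#include <stddef.h>

static void wave_focus_anc(const float *signal, const float *noise,
                           float *out, size_t n, float gain)
{
    /* gain = -1.0f : subtract the noise (cancellation)
       gain = +0.2f : add a scaled copy (boost / shaping) */
    for (size_t i = 0; i < n; ++i)
        out[i] = signal[i] + gain * noise[i];
}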

To shape waves & to make clean & precise, Sharp, Angular or otherwise shape.

In AMP's, Power converters, Cables and other energy systems such as:
Cameras, Lenses, Lasers, Emitters & receivers.

Image systems, Sensors & File save formats & HDD, SSD..
Application in principle enhances or destroys or shapes noise..
As we know Noise shaping also involves wavelets:

Both applicable second layer modifiers +-
& Wave co-modifiers.

(JPG & ALAC, AAC & SBC + Other file compression systems)

Enhancement, Sharpening & improvements..
Quality, Colour, Sound, Energy, Waveforms.

(c)Rupert S

Combined with:
https://science.n-helix.com/2021/10/the-principle-of-inversion-sign-sign-crs.html
https://science.n-helix.com/2021/11/expand-formula-sonarus.html
https://science.n-helix.com/2021/09/temporal-aliasing-image-shaping-polygon.html
https://science.n-helix.com/2021/03/upscaling-enhancement.html

Thursday, November 4, 2021

*Expand Formula* SonaRuS : Form & Shape - Codec Wavelet Complimentary cross conversion (c)RS 2021

Form & Shape - Codec Wavelet Complimentary cross conversion (c)RS 2021


Full support on all Hardware architectures & platforms + CPU & GPU.
Full support on all Bluetooth Devices, HDMI Devices, S/PDIF & TOSLink Devices.

Through Hardware Accelerated Conversion & Enhancement, or otherwise optimisation for Data Bandwidth & Quality of content; QoS

Much like most GPUs in the NVidia & AMD (& Qualcomm & ARM) lineups, I really need both of you to support : SBC, AAC, LC3 & AptX as potential HDMI connection options.


You see, as you know, largely upscaled MP3 & MP4 Content barely benefits
from Conversion to a final PCM, maybe LPCM?


But it benefits massively from cross conversion into an upscaled form of the same codec type!

They also benefit from quick low latency conversion with the same WAVE Shapes (Wavelets)..
Scaled to higher precision.

Principally in audio analogue-from-digital conversion; higher precision output from compressed waves commands the following:

Audio compression & expansion formula :


*Expand Formula* SonaRuS


D = Distance
T = Time period

X = (Angle X Over D) / T
Y = (Angle X Over D) / T²

Expand = (D/T) * (D/T²)


*UP*


(CoSin X) = (CoSin Y) * Expand | Replace

*Down*


(CoSin Y) * Expand = (CoSin X) | Replace
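
A small C sketch of the *Expand Formula* as literal arithmetic, assuming the 'CoSin' values are plain floating-point samples & that 'Replace' means an in-place overwrite; the names follow the text, everything else is an assumption.

/* Sketch: Expand = (D/T) * (D/T^2), then scale up or back down. */
static double expand_factor(double d, double t)
{
    return (d / t) * (d / (t * t));
}

/* *UP*   : (CoSin X) = (CoSin Y) * Expand */
static double expand_up(double cosin_y, double expand)
{
    return cosin_y * expand;
}

/* *Down* : recover (CoSin Y) from (CoSin X) */
static double expand_down(double cosin_x, double expand)
{
    return cosin_x / expand;
}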

(c)Rupert S

https://bit.ly/VESA_BT