Use OpenCL to trace, optimise and pre-render light, sound & effects such as force-fields,
By intercepting occlusion alongside OpenCL Direct Compute directives of force, motion & energy .. Direct Compute (OpenGL ES 3.1, Vulkan, OpenGL & DirectX)..
Direct Compute OpenCL is able to ray trace anything from simple dynamic effects to bullet-trace sound effects, with direct-mapped, effective & efficient Direct Compute OpenCL in 3 modes:
Interception real-time pre-render (microseconds) with Spontaneous : Active CL (tm)
Other functions run at reduced precision to reduce processing time, memory use or latency.
Use of cone & AE effects lowers CPU/AVX/GPU processor usage while maintaining effectiveness.
Library builds reduce development costs with Real-Time Engines.
What we need is an AVX/Vector/Nano/SiMD tessellation solve for a 15-point matrix from source to face or point of interest, and a dynamic vector box to shade in..
If we have 3 destinations, that is either 3-point 16 tessellations or a total of 3-point 48-point tessellations per culled box or vector cube..
Ray traced light, shadows and depth of field are all obtainable with high-efficiency code on our last-to-latest generations of hardware: GPU, CPU & vector processor.
Obtain the main trace and we can do micro-contoured compute shading with the extra resources & even add dynamic polygon count (with or without textures).
Rays have not died yet! Live long and prosper!
RS
VR-VMP-3D - Vector tables/SIMD/RayTracing/High Precision Float:
We can use CPU & GPU MipMap & Tessellation RiS with micro-smoothing predictive tessellation with map fonts; we can also do colour maps and LUT conversion for dynamic contrast & sound for the Realtek audio codec! We can do this for video also...
Light/Shade & Colour HDR Mapping & Polymorphic HDR 3D Sound; texture emulation of feel,
Touch and sensation/sound through Direct Compute shaders & poly-numeric maths.
Haptic 3D feeling/Sensation/Visuals/Sound & Audio for JS/script & code/Open CL/Direct Compute for 3D/Video/Internet HPC.
Sensational Virtual 3D Web/Video/Classic Video/Games/Audio/Fonts with haptic sensation and touch! All new JS ML code to make true sensation : real feels for emotional highs as you chat, tip or cam your game experience & do research high performance compute.
*
Proposed VESA & DVB Standard with Video Codecs (MP4+VP9+AV1++) :
Deep Colour Mode : Colour range of (Channel x 3) versus (Channel x 4) in mode set,
In a GPU Graphics card bios & Textures or Images & Video : Rupert S 2021-08-04
bt709 in 10Bit, 12Bit, 14Bit or 16Bit per-channel mode is a limited colour range HDR...
In 8Bit (8Bit x4 : 8,8,8,8) it is somewhat limited,
However 8x4 is a lot better than 8x3! So what are 8x3 RGB & 8x4?
RGBA (A = Alpha) or RGBX (X = Black to White, or light to dark)
Firstly, using RGBX adds a further 8-bit channel to 8,8,8, so 8 bits more colour, or rather shade,
Most monitors have 4x8 on for example VGA port or HDMI or DisplayPort.
Specifying 8,8,8,8 in the DAC (Digital-to-Analogue Converter) & hence the port multiplies the representable range by 2^8 (256)...
24Bit becomes 32Bit, Internally inside the game engine & GPU this may be the case..
However most mode sets avoid the 4th channel: 8,8,8,(8: missing).
On older cards (2008 or older) this may not even be used; however most cards have the channel..
So we should set the display mode to 8,8,8,8 & not 8,8,8
However HDMI & DVI standards imply Digital 8x4 & 10x4 & 12x4 & 14x4 & 16x4
We should mode-scan the DisplayPort socket & cable to the monitor or TV..
Therefore using all the channels is particularly important to Colour Depth & Deep Colour (a TV-supported format).
Probe the specification & examine if we can send data to the monitor in a colour profile LUT..
For example bt709,bt2020,stmpe2084 & Dolby Vision HDR..
Also in mode settings are FreeSync(AMD) & GSync(NVidia) & within these standards; A Range of LUT profiles..
Additionally the LED LUT profile for the Specific LED/QLED/DLQLED Type..
Setting the profile adds to the colour range on display on the Monitor or TV
But firstly set 4x8, 4x10, 4x12, 4x14 or 4x16 and not 3x8, because otherwise the actual colour depth is reduced by one channel..
Not setting the alpha or black channel, & so a total of 24Bit & not 32Bit, or 30Bit and not 40Bit, reduces colour depth.
smpte2084/PQ (Usually 10Bit x3 & sometimes Deep Colour : 10x4)
(Can be 16Bit x4 for ultra Deep Colour HDR)
bt709 (Usually 8Bit x3 & sometimes Deep Colour : 8x4)
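As a hedged sketch of the channel arithmetic above (the helper names are illustrative, not from any standard):

```python
# Sketch: total bits per pixel and distinct values for 3- vs 4-channel
# mode sets. The helper names are illustrative assumptions.

def mode_bits(bits_per_channel: int, channels: int) -> int:
    """Total bits per pixel for a given mode set."""
    return bits_per_channel * channels

def distinct_levels(bits_per_channel: int, channels: int) -> int:
    """Distinct representable values per pixel."""
    return 2 ** mode_bits(bits_per_channel, channels)

# 8,8,8 (RGB) versus 8,8,8,8 (RGBA/RGBX): 24Bit becomes 32Bit,
# and the representable range grows by a factor of 2**8 = 256.
extra = distinct_levels(8, 4) // distinct_levels(8, 3)

# The same comparison for the deep-colour widths named above
for b in (10, 12, 14, 16):
    print(f"{b}x3 = {mode_bits(b, 3)} bits, {b}x4 = {mode_bits(b, 4)} bits")
```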
(c)RS
*
Haptic Vibration on : (by location 3D & distance) : RS
Haptic feedback is a 3D wavelet, usually a black-and-white image/texture.
3D audio is the same, but the objective is to feel.
Sometimes the objective is to feel 3D sound (Braille-like) & sometimes to feel a wave.
We can compress tones into a wavelet pack,
The significance of accuracy is reduced with complexity,
However we need the right wavelet cosine & duration (Length x Shape).
The use of FFT to draw polygon ellipsoids & curves and non-conformist 3D shapes:
Examples:
Horses, Giants .. Mole rats
Fireballs & rain, swimming .. you go on; we need this game + engine.
The sensation of the ground & the feeling of surfaces contains the following things (example)
List:
During an earthquake the ground feels like jelly.
The feeling of a fireball shock wave depends very much on distance;
A horse feels rough or coarse during a fast ride,
Soft when touched as you feed it..
The nose feels wet, but there is steam that adds to the feeling..
A shocking reality of how complex a feeling is in the mind,
A bow draw has the feeling of string, but are we going to represent legs as well?
Rough earth & the shaking of legs as we chase the tiger.
Rumble control : Haptic; The living Earth
Some feeling of a haptic world; more sophisticated to process than a rumble pack.
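The wavelet-pack idea above (tones of a given length x shape, packed together) might be sketched as follows; the sample rate and tone table are illustrative assumptions:

```python
import numpy as np

# Sketch of a haptic "wavelet pack": each tone is a cosine of a given
# frequency, amplitude and duration, windowed so pulses blend smoothly.
# The tone table and sample rate are illustrative assumptions.

RATE = 8000  # assumed haptic actuator sample rate (Hz)

def tone(freq_hz, amplitude, duration_s):
    """One windowed cosine 'wavelet' (Length x Shape)."""
    t = np.arange(int(RATE * duration_s)) / RATE
    window = np.hanning(t.size)  # the shape of the pulse
    return amplitude * np.cos(2 * np.pi * freq_hz * t) * window

def pack(tones):
    """Concatenate tones into one compressible wavelet pack."""
    return np.concatenate([tone(f, a, d) for f, a, d in tones])

# e.g. a low earthquake rumble followed by a short fireball shock
wave = pack([(40, 1.0, 0.5), (200, 0.6, 0.05)])
```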
*
Revolutions in vector: SVM Machine learning optimised & dynamic point/pointer cached ray tracing
Machine Learning Probability Vector Ray Tracing
ML-PVR-T : Wonderful! ML Probability Vector Ray Tracing
Dynamic Many-Light Sampling for Real-Time Ray Tracing
ReSTIR.pdf (47.85 MB)
https://mirrorace.com/m/X7ia
021-026.pdf (20.02 MB)
https://mirrorace.com/m/X7ib
RayTracing Vectorized_Production_Path_Tracing_DWA_2017.pdf (2.79 MB)
https://mirrorace.com/m/51v6t
Raytracing Cell Vector Unit - AVX.pdf (411.51 KB)
https://mirrorace.com/m/51v6u
Ray Tracing CPU Study 2c2adb30f1ea25eb374839f3f64f9a32b6c7.pdf (6.11 MB)
https://mirrorace.com/m/51v6A
Raytracing Multi-threaded Sycro Burst thread - A_Vectorized_Traversal_Algorithm_for_Ray_Tracing.pdf (753.83 KB)
https://mirrorace.com/m/4lxS6
https://aras-p.info/blog/2018/04/10/Daily-Pathtracer-Part-7-Initial-SIMD/
https://aras-p.info/blog/2018/11/16/Pathtracer-17-WebAssembly/
Area-Preserving Parameterizations for Spherical Ellipses 1805.09048.pdf (5.11 MB)
https://mirrorace.com/m/4lxS5
Sphere Sampling
Peters2019-SamplingSphericalCaps.pdf (13.52 MB)
https://mirrorace.com/m/3FApr
ML SVM Assessment - DDOS Protection - Sustainability-12-01035.pdf (1.11 MB)
https://mirrorace.com/m/1Dro1
Attack and anomaly detection in IoT sensors in IoT sites using ML 1-s2.0-S2542660519300241-main.pdf (2.19 MB)
https://mirrorace.com/m/51v5z
Machine learning for internet of things data analysis 1-s2.0-S235286481730247X-main.pdf (0.99 MB)
https://mirrorace.com/m/3FAb0
Deep Learning Methods for Sensor Based Predictive Maintenance and Future Perspectives for Electrochemical Sensors - Namuduri_2020_J._Electrochem._Soc._167_037552.pdf (1 MB)
https://mirrorace.com/m/51v6D
An ultra-compact particle size analyser using a CMOS image sensor and machine learning s41377-020-0255-6.pdf (2.26 MB)
https://mirrorace.com/m/51v5u
May also help environmental policy:
Processor Applicable Heat Comfort Zones - Application of IoT and Machine Learning techniques 1-s2.0-S1876610218304247-main.pdf (1.21 MB)
https://mirrorace.com/m/51v5C
Deep Learning in Agriculture + Food Supply sensors-18-02674.pdf (1.4 MB)
https://mirrorace.com/m/51v5w
svm-notes-long-08.pdf (1.31 MB)
https://mirrorace.com/m/1Dro9
LTCN Model - Liquid Time Constant Networks
Now we train DLSS and RiS sharpening & image-enhancement nodes based upon the LTCN model.
It is possible to train an LTCN & there are several Python examples.
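A minimal, hedged sketch of a liquid time-constant cell (one fused Euler update, following the LTC literature; the sizes, weights and step are illustrative, not a trained model):

```python
import numpy as np

# Minimal sketch of a Liquid Time-Constant (LTC) cell update.
# Sizes, random weights and the Euler step are illustrative assumptions.

rng = np.random.default_rng(0)
n_in, n_hid = 3, 8
W = rng.normal(scale=0.5, size=(n_hid, n_in))
U = rng.normal(scale=0.5, size=(n_hid, n_hid))
b = np.zeros(n_hid)
tau = np.ones(n_hid)  # base time constants
A = np.ones(n_hid)    # bias attractor

def ltc_step(h, x, dt=0.1):
    """One fused Euler step: the input gates the effective time constant."""
    f = 1.0 / (1.0 + np.exp(-(W @ x + U @ h + b)))  # sigmoid gate
    return (h + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

h = np.zeros(n_hid)
for _ in range(20):
    h = ltc_step(h, np.array([1.0, 0.0, -1.0]))
```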
Adaptive hyperparameter updating for training restricted Boltzmann machines on quantum annealers
& Wide Path SiMD
IEEE 754 Precision 151193633.pdf (2.29 MB)
https://mirrorace.com/m/1Dro5
The world defined by Science - IEEE754 Precision conformant compute maths - RS 2020-06-07.txt (1.39 KB)
https://mirrorace.com/m/1Dk5c
Sony_to_release_world’s_first_Intelligent_Vision_Sensors_with_AI_processing_functionality.pdf (621.57 KB)
https://mirrorace.com/m/1Dro3
https://www.analyticsvidhya.com/blog/2017/09/understaing-support-vector-machine-example-code/
https://scikit-learn.org/stable/modules/svm.html
https://towardsdatascience.com/https-medium-com-pupalerushikesh-svm-f4b42800e989
https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47?gi=51274a92cf9b
http://web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf
https://docs.microsoft.com/en-us/machine-learning-server/r-client/what-is-microsoft-r-client
https://docs.microsoft.com/en-us/machine-learning-server/python-reference/microsoftml/microsoftml-package
https://docs.microsoft.com/en-us/machine-learning-server/install/microsoftml-install-pretrained-models
https://github.com/iterative/dvc
https://medium.com/tensorflow/a-gentle-introduction-to-tensorflow-js-dba2e5257702
https://www.seeedstudio.com/Coral-USB-Accelerator.html
Reducing cost & increasing margins : The power of AI Machine Learning
In a staggering study :
https://is.gd/3DMLSorcerer
"Calculated based on a resource usage & testing: “We train XLNet-Large
on 512 TPU v3 chips for 500K steps with an Adam optimizer, linear
learning rate decay and a batch size of 2048, which takes about 2.5
days."
https://venturebeat.com/2020/07/15/mit-researchers-warn-that-deep-learning-is-approaching-computational-limits/
In detail ML:
https://medium.com/syncedreview/the-staggering-cost-of-training-sota-ai-models-e329e80fa82
https://medium.com/syncedreview/cmu-google-xlnet-tops-bert-achieves-sota-results-on-18-nlp-tasks-66f7022f34f5
The latest features of CPU, GPU & brain chips more than counter trouble:
SVM KNN Tensor (AMD); brain-chip majors: Google, Sony visual vector: Japan, China, IBM, Intel
VRISC: ARM, NVidia
Strategy is important.. the brain chips are mostly about inference,
SVM is about formation & inference implication..
VRISC is about efficiency & power usage..
GPUs have many features: FP4, 8, 16, Float & SiMD.
The point is to achieve results that much improve both efficiency & accuracy.
(c)RS
SVM Architectural features:RS
CPU & GPU/Processor:
Qualifies 1 to 9 dimensions with elliptic curves,
Into 2 or 3, & statistics that all can use to make sense of data,
Under the proposal, elliptic curves from known & recorded messy data sets shall be unitised:
For security
Cypher & GPU/CPU List:RS
SVM Elliptic: Random, Chaos, Entropic like data for security & AI random behaviour
Elliptic curves for security: Pure, Known Messy & Exploratory Messy
Shapes for Games and polygon, Behaviour, Motion, Winds, Rain, Storms : Nature
Mapping & processing fur and other creative tasks requiring projection or assimilation & discovery.
Tessellation.
Used directly through automated delivery to AES & cryptographic features:
Firmware & kernel.
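The "qualify 1 to 9 dimensions" step could be sketched with the scikit-learn SVM API referenced elsewhere in these notes; the synthetic data and RBF kernel choice are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# Sketch: qualifying messy, higher-dimensional data with a kernel SVM.
# The two synthetic classes in 9 dimensions are illustrative.

rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, size=(200, 9))  # messy class 0
b = rng.normal(loc=2.0, size=(200, 9))  # messy class 1
X = np.vstack([a, b])
y = np.array([0] * 200 + [1] * 200)

# RBF kernel: smooth, elliptic-style decision boundaries
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy
```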
(c)RS
(c)Rupert S
*
Quantum Neural Networks: Compression & Quantisation for performance boosting:
Precision enhancement & time reduction on a 4/6/8Bit quantum computer.
As explained in the article, noise is a big extrapolation issue in quantum computers,
Particularly for machine learning!
Quantizing the data into conservative maps with precise values
Increases precision in our world. The quantum world is imprecise, surely;
However our data sets do need output that precisely maps to our quanta sampling,
During sampling, quantum data is particularly vulnerable to sample-extrapolation precision reduction.
Strategy is as follows: List : RS
Points are merged with a remainder bit table,
(The remainder bit table has a number of values in a var table.)
N+var,1/2/3>
SVM Elliptic Quanta table (var)
Elliptic curves are mapped in SVM,
2 to 9 dimension to save space; More for expansive detailing.
Quantisation in this method allows data to merge into reusable Neurons & tables.
Lz4/DOT/GZip compression.
The remainder table will be accessed in the error correction phase.. the usage phase.
The deliberation is to form the resulting data for our quantum sampling;
In as small a package as possible,
Quanta sampling is error prone.
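A minimal sketch of the merge-with-remainder-table strategy above, assuming a simple uniform quantiser (the step size is an illustrative assumption):

```python
import numpy as np

# Sketch of the strategy above: merge points into coarse quanta and keep
# a remainder bit table for the error-correction / usage phase.
# The step size is an illustrative assumption.

def quantise(values, step=0.25):
    """Split each value into a coarse quantum plus a small remainder."""
    quanta = np.round(values / step).astype(np.int32)
    remainders = values - quanta * step  # small values compress well
    return quanta, remainders

def reconstruct(quanta, remainders, step=0.25):
    """Error-correction phase: re-apply the remainder table."""
    return quanta * step + remainders

x = np.array([0.1, 0.9, -0.33, 2.5])
q, r = quantise(x)
assert np.allclose(reconstruct(q, r), x)
```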
(c)Rupert S
QE_BEC_ML
Quantum Elliptic Bit Compressed Elliptic ML:
By combining the factors:
Resonance, Harmonics & Noise filtration & Noise Cancellation in Machine learning..
With compression & bit filtering, local node bit-depth is not flooded.
The proposal is that bit instability is created by noise & in addition by the work done,
Local data width & power output .. inside the quantum bit; power plays a reset.
The name of a reset in our terms is a Destabilised Bit, in other words:
A non-functioning quantum fluctuation system. Combining field control & polarity to maintain stability
Is best served by adaptive machine learning,
Like a chef, all the bits of the team controlled dynamically & diplomatically.
Tuning the fields with QE_BEC_ML allows the Irregular bit adaptive Butterfly effect to stabilise the system,
The Man on the job is you.
(c)RS https://science.n-helix.com
University of Chicago
In tandem with the usual electromagnetic pulses used to control quantum systems,
The team applied an additional continuous alternating magnetic field.
By precisely tuning this field,
The scientists could rapidly rotate the electron spins and allow the system to "tune out" the rest of the noise.
Modification that allows quantum systems to stay operational—or "coherent"—10,000 times longer than before.
"Scientists discover way to make quantum states last 10,000 times longer"
by Louise Lerner, University of Chicago
MicroCell_VICE_RS(tm) Quantum Useful & Usable
MicroCell_V_ICE(tm) for AI workloads & standard model databases & data
Micro Cell compression with layer pack elliptic vectored Intelligent compression & encryption. (c)RS
Elliptic key defined data in RNDSEED noise:
Encipher by confusion Quantum Quanta - The data to confuse all
Up to 16 databases usable at the same time,
Each has its own elliptic key defined to compress & or enhance.
Data sets to be individually compressed in cells.
Noise data set follows elliptic curve standards & can use SVM & Encryption features.
Advantages compression rules work on all data even noise,
Cell Level 1: the whole data archive (contains the Elliptic noise cell)
Cell Level 2: 2 to 16 (standard practical for multiple ops) (can be more)
All models contain elliptic noise data & are essentially confusing to the decompressor that has no noise key.
*
Use of ML, Precision & Statistic Conversion
For example this works well with fonts & web browsers & consoles or standard input display hubs or User Interfaces, UI & JS & Webpage code.
Solid snakes disciple R Python SVM
Machine learning, The Advanced SVM feature Set & Development
CPU lead Advanced SVM potential
GPU refinement & memory Expansion/Expression/Development
SVM/ML Logic for:
Shaders,
Tessellation,
Compression,
PML Vector Ray-Tracing
Sharpening Image Enhancement:
(S²ecRETA²i)(tm)
Reactive Image Enhancement : ML VSR : Super Sampling Resolution Enhancement with Tessellated Precision & Anti-Aliasing Ai (S²ecRETA²i) + (SSAA)
Color Dynamic Range Quantification, Mesh Tessellation, Smoothing & Interpolation
Finally MIP-MAP optimised sampling with size/distance, dynamic cache compression.
Machine learning,
The Advanced SVM feature Set & New developments..TPU <> GDev,AMD
Extended support for ML means dynamic INT4/8/16/Float types and dot-product instruction execution.
GPU/CPU/Feature-set/SVM
"Dual compute unit exposure of additional mixed-precision dot-product modes in the ALUs,
Primarily for accelerating machine learning inference,
A mixed-precision FMA dot2 will compute two half-precision multiplications and then add the results to a single-precision accumulator. For even greater throughput,
Some ALUs will support 8-bit integer dot4 operations and 4-bit dot8 operations,
All of which use 32-bit accumulators to avoid any overflows."
Core ML runs on all 3 hardware parts: CPU, GPU, Neural Engine ASIC; SVM.
The developer doesn't specify; the software middleware chooses which part runs ML models,
Core strategic advice & adaptable SVM CPU <> GPU
https://scikit-learn.org/stable/modules/svm.html
(c)RS
Super resolution, AKA the resolution enhancement feature,
Is already enabled by super sampling on GCN architecture;
The availability of this product really comes down to the pipeline for sampling..
The real investment is in compute shaders that will load the textures with minimal processing extras,
Exploiting load time.
The use of DOT3 to DOT5 compression
Really means that implementing large-scale, higher-precision A (file storage in RAM) to B (final render + cache data)..
Creates the situation, in decision making, where processor vector-based texture resolution enhancement
Comes at little RAM storage cost;
When the processing is AVX/SiMD, it is a light flow of additional data applied to the texture
As a cached bump-map & co-modifiers..
The decision to cache the data with DOT5 or better compression means that dual-loading data is possible, given metadata on load...
Combined data can be loaded from game cache (On SSD/HD or in computer RAM),
Given the availability of direct CPU access to GPU RAM on the PS4/5 and Xbox,
Increasingly the use of CPU AVX or shared SiMD is capable of processing the flow dynamically..
Improving this, the shared CPU & GPU cache, the frame buffer, creates a vast potential to simply leverage the on-die CPU & GPU capacity without suffering DMA flow-capacity performance issues!
Data width is a pretty important feature to deal with, and fortunately we are not stuck with 32Bit or even 64Bit, with DRAM potential being 384 Bit on DMA
& also directly through the board PCIe 3 to PCIe 5 specifications,
On the PS5, the 7-layer QoS for data transfer & the Direct Storage layer technology;
On the Xbox, Direct-to-RAM DMA for compressed pure DOT3/5 textures; & processing directly from CPU to GPU should not involve pipeline fluctuations or imprecise mapping of the 128-space SiMD precision, control & enhancement dynamic re-compressed adaptive feature set.
(c)RS
ReSTIR Additions
Super Sampling is a technique of loading a texture & upscaling the texture into a 4x to 8x larger cache,
Lanczos & Gaussian blends combined with sharpening (also available in AA & Gaussian sharpening & 3D spline interpolation),
Added to sharpening & upscaling is bi- & tri-linear interpolation..
Interpolation requires that you estimate points between pixels in the texture or image..
The implementation of Method Examples 1 to 4, including mipmapping [SS][SubS] frame buffer with multithreading micro-framebuffer groups,
Allows super-sampling with micro-block frame recursive & forward temporal prediction.
The simple storage of a frame in advance enables the technique,
Once a frame is in the buffer the next frame is managed with:
Included Recursive & Forward frame interpolation.
Sharpening & Image Gaussian Blend, Sharpen & Sub-Sampling Anti-Alias
In the Micro Frame Buffer & Texture Context & Full Frame colour, WCG & HDR Quality optimisations.
Interpolation methods include:
Bit Average differential at higher DPI
Gaussian blending at a higher DPI & Sharpening
Both methods have an additional Method: ML Identify & Classify ResNet
ML Identify ResNet; Identifies the Shape intention & Classifies the object by content.
We can guess that a nose is angular down for example or that a Square will stay square..
MetaData containing the identity of objects helps a lot in classifying.
ML_iRN Resolution Upscale & Texture Scaling
Texture 256 | Texture buffer Size * N +
{
3D Spline Interpolation,
Gaussian,
AntiAlias,
Lanczos
}
Texture Buffer Final | Size * N
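The Texture * N upscale step above might be sketched with plain bilinear interpolation; Gaussian, spline or Lanczos kernels would slot in the same way, and the 2x factor is illustrative:

```python
import numpy as np

# Sketch of the Texture * N upscale step: bilinear interpolation estimates
# points between pixels, as described above. The 2x factor is illustrative.

def upscale_bilinear(tex, n=2):
    """Upscale a 2D texture by factor n using bilinear interpolation."""
    h, w = tex.shape
    ys = np.linspace(0, h - 1, h * n)
    xs = np.linspace(0, w - 1, w * n)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    return ((1 - wy) * (1 - wx) * tex[np.ix_(y0, x0)]
            + (1 - wy) * wx * tex[np.ix_(y0, x1)]
            + wy * (1 - wx) * tex[np.ix_(y1, x0)]
            + wy * wx * tex[np.ix_(y1, x1)])

tex = np.arange(16, dtype=float).reshape(4, 4)
big = upscale_bilinear(tex, 2)  # 8x8 result
```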
(c)RS
Method Example 1 to 4 including Mipmapping Reference
https://science.n-helix.com/2023/02/smart-compression.html
https://science.n-helix.com/2022/09/ovccans.html
https://science.n-helix.com/2022/08/simd.html
Vector Encoding : VECSR
https://science.n-helix.com/2022/04/vecsr.html
https://science.n-helix.com/2019/06/vulkan-stack.html
https://science.n-helix.com/2022/09/audio-presentation-play.html
DL-ML slide : Machine Learning DL-ML
By my logic the implementation of a CPU+GPU model would be fluid to both..
Machine Learning : Scientific details relevant to DL-ML slide (CPU, GPU, SiMD hash table (M1 vector matrix-table + speed))
The vector logic is compatible to both CPU+GPU+SiMD+AVX.
Relevant because we use Vector Matrix Table hardware.. and in notes the Matrix significantly speeds up the process.
(Quantum Light Matrix)
The relevance to us is immense with world VM servers
DL-ML Machine Learning Model compatible with our hardware
By my logic the implementation of a CPU+GPU model would be fluid to both..
The vector logic is compatible with both CPU+GPU.
However this is a model we can use & train..
DLSS & FidelityFX Super Resolution (c)RS
The strategy to use is as follows:
Firstly our front line defence team is going to use:
https://is.gd/ProcessorLasso (reference material)RS
RAM usage versus clock cycles used per operation for dynamic RAM Caches
SiMD for light fast dithering into 10Bit & HDR & edge estimation with curve estimation tessellation..
16Bit is not ideal as a width, but for micro-contour curve & polygon quick-draw estimates, the presence of at least 2 SiMD units per CU is potentially quite relevant...
Tessellation maths fills the gaps in local polygon count & potentially uses RAM for 3 to 7 frames:
Micro-contoured contrast dithering..
Micro-contoured contrast shaping & sharpening..
Smoothing
AVX, MMX & SiMD are all capable of keeping dithered results in the local FP(16,32,64) resultant.. compressed texture..
With many MB of capacity to store in local registers, the GPU CU is capable of maintaining effectively static resultants & refining them in the localised register group & passing them back to RAM as soon as they are needed globally..
Refinements to tessellation of texture edges for dithering, AA & interpolation can be local to a compute unit group & sent to RAM for frame distribution..
Multitasking at this depth will use the high-data-width asynchronous caches.. fully exercising the local CU's ability to process all SiMD combined with the float FP maths objective of the compute unit group..
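The "light fast dithering into 10Bit" note above could be sketched as vectorised (SIMD-style) ordered dithering; the 4x4 Bayer matrix and widths are illustrative assumptions:

```python
import numpy as np

# Sketch of vectorised ordered dithering from float values to 10-bit
# output, per the note above. The 4x4 Bayer matrix is illustrative.

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def dither_to_10bit(img):
    """img: float32 in [0,1]; returns 10-bit integers with ordered dither."""
    h, w = img.shape
    threshold = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    levels = 1023  # 2**10 - 1
    return np.clip(np.floor(img * levels + threshold), 0, levels).astype(np.uint16)

# a smooth horizontal gradient, 8 rows x 64 columns
gradient = np.linspace(0.0, 1.0, 64, dtype=np.float32)[None, :].repeat(8, axis=0)
out = dither_to_10bit(gradient)
```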
Raytracing-element CUs add light fidelity & localised radiance, short prospect & cached results:
Examples of ray tracing unit + SiMD usages:
Light interpolation between points on textures, combined with bump-map localised colour variance..
Quantisation of polygon contours..
Image shaping through lensing emulation & distortion..
Flexibility is key here, Static results not always required but for photo mode yes, Unique.
Super Resolution output that enhanced output & input with enhanced Texture Architecture (c)RS
Super Resolution output that enhances output above the display's official resolution requires something of the opposite effect to advanced demosaicing methods like super-resolution image enhancement..
In that we are attempting to analyse the LED/QLED/NED pattern and output a signal, in analog or digital, that uses every individual red, green & blue pixel in a way that 4x's the sharpness of the resulting image, while composing the image that, in effect, we have aliased into red, green & blue...
Our job is to ensure that the composite signal decomposes into a 4x resolution output on a display.
In the case of textures we will do the same in reverse (the final layer), composing into a sharper image..
We apply DOT image-compression wavelets in FP HDR values that compose all details into a smooth, sharp HDR image texture; our job is refining wavelet compression that will benefit from our defined wave patterns..
The advantage is that we made the smoothing-sharp wavelet & we will take full advantage of this when making our JPG & DOT images & video..
Wavelet is JPEG-origin compression (as in JPEG 2000)..
We lose nothing knowing the pattern advantage & can fully compose to infinity.
SIM_SHARD-HDR SiMD Compression Texture & Image formats
& SIM_SHARD-HDR_LiquidDisplay_4_HDMI
&DisplayPort SiMD Compression Texture & Image formats Advantaged Output & Compression.
(C)Rupert S
Mesh Shader pipeline &+ Machine learning
Mesh shaders impress simply for forward-facing vertex polygons & back-face culling of vertices from the game;
Basically, mesh shaders are there to automate the culling of superfluous polygons & to reduce the work required to cull polygons from the back of objects that cannot be seen, or objects in the distance..
Considerably reducing work, improving game performance while increasing polygon count..
(c)RS
CoarseMESH DynamicPatchBunch Shading : A+B+C MESH
Coarse pixel shading, PatchShading & BunchShading are approximately grouped with VRS. (c)RS
The difference is that precision is varied..
BunchShading : Group cache repeat; Area mesh; MESH Shading compatible
PatchShading : Variable patch dynamic; Cache lose+gain
CoarsePixelShading : Data VRS; Approximates; Primitive+Fast
CoarseMESH DynamicPatchBunch Shading : A+B+C MESH
A: CoarsePixelShading
B : PatchShading
C : BunchShading
From a GCN perspective, 16x16 or 32x32 patch tessellation (2x16 per CU) patch render;
The Switch & the PS4 would benefit,
The PS5 & Xbox would most surely do 32x32 & 64x64.. however, would Bunch 4x16 cluster well on both?
I believe so.
Bunch shading is grid shading where frequent groups process patches for periods of time,
Improving efficiency!
Shaders on fire!
Rupert S https://science.n-helix.com
"Figure 9: Our algorithm accesses triangles in the vicinity of the currently rasterized triangle during patch-space pixel shading, and thus
benefits from a tessellation scheme with good spatial locality.
This is not unique to AMFS, As a good triangle ordering also improves memory
coherency for texture/buffer accesses.
Left: four different patterns were tested for the scene in Figure 4 (right), with the default (as generated by OpenSubdiv)
Giving results close to the Morton and Hilbert space-filling curves.
Note that scanline order causes cache thrashing when
an entire row does not fit, As expected.
Right: three different tessellation levels were tested.
Even at fine tessellations, e.g., 2 ×64 2 = 8k
triangles per patch,
A moderately sized (64 vertices) domain shading cache works well (7.0–16.7% DS re-shading).
"
**
High precision digital map-scale ML Virtualise Function : HPD_MMVF : (c)RS
My idea is that you literally do not have to upscale the polygon or tessellation in VR Resolution..
How does this work ?
Polygons are full-resolution objects because the maths involved is high resolution.
Polygon drawing at 8K is not the problem..
All polygons are rendered on 16Bit, 32Bit or higher SiMD..
Or very high definition float operations.
The textures are rendered in a resolution of objective performance at the limit of the performance profile..
What does this mean ?
It means we render in 16Bit, 32Bit (SiMD)
Or higher float, Single or double precision.. Where required.
How and why ?
The rendering resolution of the source in the Pixel render pipeline, Rasters ..
Resolved at our convenience & our performance demographic.
Resolving Raytracing render is a non physical Floating unit heaven & we render into the frame buffer at the full precision of the Float unit operation we desire & or need.
From this perspective the Machine learning upscaler has all the advantages of knowing that we have as many of the source materials as convenient to us,
(Optimal performance at the resolution that maximizes performance at the Material precision we require)
In the case of using a Float Operation or a SiMD..
We use our full render precision..
16Bit
32Bit
64Bit
128Bit
We decide what we will use based on performance & the commonness of the required asset.
Polygons have a lower memory requirement..
Textures are prioritised: Quality? yes but compressed asset.
All shader operations in the pipeline are cached as non raster where applicable..
Same with polygons.
Rendering fast by upscaling into our ultra high resolution base map of polygons..
Makes Anti Aliasing lower resolution Samples a breeze to rasterize..
Yes into upscaled ML that maps the base polygon for:
The Emulation of advanced High fidelity 3D & 2D
(c)RS
Digital Vector Machine Learning : DVML
I have full confidence this technique will help reduce both the training time and cost of all DLSS training & also improve quality.
Please remember that ray tracing CUs are great for tessellation as well;
There are quite a few things, like LUT tables, where the vector functions of ray tracing CUs are more useful than the traditional CU.
Be Great Real 64Bit Vectored Audio (update drivers now)
Be great XR Dolby Gold Awards Camera's Smart Phones & TV's
Be very good for Games; Cameras; The televisions & Phones to have my system : RS : Great Business
(c)RS
My Super Resolution Technique will save considerable effort Emulating curves & polygons in Quadratics...
At the end of the day if the XBox does 97TOPS 4Bit & 47TOPS 8Bit...
All the AMD range since 2010 have enough TOPS for the Machine Learning to be justified,
Saving on 20000 Quadratic Curve Emulations per frame.
Machine learning has a place; As i say:
Utilisation of the FPU & SiMD curves & polygons before rasterisation..
Completely removes the necessity to emulate the curve/polygon from the raster ROPs..
Machine Learning ML TOPs are not forced to work as hard,
Resolution Emulation is more correct & The results both easier to obtain..
Anti Alias & Emulate.
To put it simply a screen of 16000x4000 in the mind of a FPU becomes a raster of 1600x920
The float unit does contain a much superior vector & ML has less to learn.
(c)Rupert S
VRSS : Virtual Resolution SuperSampling with optimised display resolving (c)Rupert S
As you know Supersampling is the technique RS proposed..
for VSR & Super resolution
Personally i appreciate the effort you guys put into the product and the confidence you have in my process
https://www.youtube.com/results?search_query=Deep+learning+super+sampling
However, implementation is confusing for you all; so I will explain a simple-sounding theory of use.
List:
Virtual Screen Resolution is the unavoidable result of the client trying to play a game in HD on a 1080 monitor,
However the technology does allow us to present very sharp looking virtual 8K on a 4K monitor..
Up or down, we can present stunning results..
Super resolution is where we present a lower dynamic resolution in the backend & present it in:
A higher resolution like 4K
Virtual Screen Super Resolution is the final proposed object of ambition, where we..
Both present to the maximum resolution of the monitor (4K for example) & present a lower resolution on the backend...
But also present a dynamically resolving super-sampled presentation to the front end
(The display) in 8K (for example) with a 4K real resolution
(Highly precisely AA'd & super sampled in 8K)..
With a lower dynamic resolution on the back end.
To make this clear presenting dynamic content is problematic but very rewarding to the viewer, However presenting a clocked performance analysed & efficient content requires a simple process for our Machine learning Servers:
VRSS List:
Performance of the card. Desired frame rate.
Quality of the monitor & resolution; The Speed of the HDMI display link.
The precision of the monitor (HDR 12Bit for example + Vision)
DAC Quality for link
The precision of the HDMI link is used to subtly dither the results in 12Bit; Can be done in the monitor & or the Video card.
Precisely how good the results are of our DLSS/VRSS/VSR/VRSR is determined largely by how subtly we present the results...
12Bit Dolby Vision presents the most subtle results we can achieve..
However only the GPU needs to present it, from texture stage to presentation, for us to begin to see results..
Compatible monitors presented with 12Bit Dolby Vision give the most subtle results we can achieve,
In our capacity to emulate through the engine the most subtle illusion & quality that we can as yet present to the viewer.
In order to present the highest fidelity into our:
VSR Virtual Screen Resolution: 1080p monitor, supersampled at 4K into 1080p
Boxed Resolve VSR: 4K monitor, internal resolve in an 1800p frame buffer, super sampled to 6K & projected into 4K
We sample the SiMD & float maths; As we know, SiMD is commonly 16Bit or 32Bit float (FP16:FP32).
We can optimise the output sharpening with:
Subpixel Sampling SSA and Pixel averaging 4x4 Pixel Sampling,
We can use larger or smaller samples as we prefer.
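The 4x4 pixel-averaging pass above can be sketched as a plain box filter. A minimal sketch on a grayscale image stored as a list of rows; the function name is an assumption:

```python
def box_downsample(pixels, factor=4):
    """Average each factor x factor block of a grayscale image (list of rows)
    into one output pixel -- the 4x4 pixel-averaging sampling described above."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            # Gather the block and emit its average as one sample.
            block = [pixels[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# An 8x8 image of constant value 10 downsamples to a 2x2 image of 10s.
img = [[10] * 8 for _ in range(8)]
print(box_downsample(img))  # [[10.0, 10.0], [10.0, 10.0]]
```

Larger or smaller samples, as the text says, are just a different `factor`.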
We are able to use FFT super sampling with shape mapping & averaged ellipsoids or lines,
The average centric value of curvature over an interpolated sample distance of 3 to 8 points.
3 to 8 points allows quick maths with, for example:
4Bit & 8Bit ML
3-point to 16-point mapping allows us to interpolate/tessellate within the 16Bit SiMD buffer cache & is of reasonable precision.
We make a map between these points that curves with FFT SiMD maths; this is called tessellation or interpolation..
We can do this with both textures & polygons; The FFT average allows us to:
Create a curve that average-smooths between the samples & does not require us to use a line.
For our purpose either the curve or the line represents the most accurate result & we would like to know which!
So we sample deviation.. Examples?:
Mandelbrot
Equations
Graphs
Fonts
LUT Colour Palette
Audio & 3D Audio
Wave patterns
Haptic feedback
For research super sampling, such as wing vibration or audio, or for example resampling artwork or rain dynamics. RS
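The 3-to-8-point FFT average-smoothing described above can be sketched with a tiny DFT low-pass: keep only the lowest frequencies of a short run of samples and invert, and a smooth curve (not a straight line) falls out. A minimal sketch; `fft_smooth` and the `keep` band are assumptions:

```python
import cmath

def fft_smooth(samples, keep=2):
    """Average-smooth a short run of sample points (3 to 8, as above) by
    keeping only the lowest `keep` frequency bands of its DFT and
    inverting back to a curve."""
    n = len(samples)
    # Forward DFT (n is tiny, so the naive O(n^2) form is fine here).
    spec = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    # Zero out everything above the kept band (low-pass average).
    for k in range(n):
        if min(k, n - k) > keep:
            spec[k] = 0
    # Inverse DFT back to the smoothed curve.
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# An alternating spike train smooths to its average value.
print([round(v, 2) for v in fft_smooth([0, 1, 0, 1, 0, 1, 0, 1], keep=1)])
```

With `keep` large enough the original samples come back unchanged, which is the "deviation" comparison the text asks for: sample the difference between curve and line against the originals.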
The use of FFT to draw polygon ellipsoids & curves and non-conformist shapes:
https://www.kfr.dev/
Advanced FFT & 3D Audio functions for CPU & GPU https://gpuopen.com/true-audio-next/
Accuracy levels are up to developers..
Perception is all ours.
The Super Resolve technique will save considerable effort emulating curves & polygons in quadratics...
At the end of the day, if the XBox does 97 TOPS at 4Bit & 47 TOPS at 8Bit...
All the AMD range since 2010 have enough TOPS for the machine learning to be justified,
Saving on 20000 quadratic curve emulations per frame.
Machine-learning utilisation of the FPU & SiMD for curves & polygons before rasterization..
Completely removes the necessity to emulate the curve/polygon in the raster ROPs..
We can scope SiMD & FPU maths precision without needing to understand the SDK they used..
For precise representation of our desired output virtualisation,
Either:
Original resolution x upscale + SiMD+FPU vector scope (the code run by the application or game)
Original resolution x upscale + SiMD+FPU vector scope; Into virtual resolution
To vector scope: To understand the maths processes run by the program..
In order to improve the precision of the output; We know that the SiMD+FPU is a lot higher precision..
Than the output display resolution,
We can therefore promote the resolutions of all elements, in float values, to vector quality.
Vector scope (the code run by the application or game)
We can then machine learn from the scope & that equals superior results,
But we can also directly apply those results through SiMD+FPU maths.
(c)Rupert S https://science.n-helix.com
Game performance counter : Variable Rate Shading (VRS) : Super-Sampling & Shader Vertex cull
VRS Variable Rate Shading (dynamic back culling with tessellation)
FPS & compute-core workload, heat, polygon count, tessellation & interpolation of a dynamic nature.
The use of this technique combines with MESH shader polygon culling..
To improve & optimise the operation of 3D graphics; Without necessitating the re-evaluation of a polygon map's vertex count or rewriting a lot of game code.
Functional usage of VRS and mesh shaders limits the amount of work you have to put into optimising a game for framerate & image quality..
You are still advised to optimise your original polygon objects for polygon count & obviously a higher number of optimally positioned vertices does allow higher levels of tessellation & a better game look..
Just as with textures, the polygon count of objects is to be compressed & saved as glTF
Mesh count & hinting (such as font hinting in TT fonts)
Quality of a game then varies with higher-performance hardware but will still run on a 4GB PS4/XBox.
Some patching of culling back planes in older games would allow even consoles like the PS4 & Xbox One:
Games to be improved by:
Dynamic patching of the older culling & tessellation standards..
Across the front plane & secondary &+ (up to 8 planes)
&or alternatively through assessing image focus through ML (machine learning) image focus
& sharpness over distance & size..
Thus applying VRS through the virtual layers of a 3D render.
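Applying VRS through the virtual layers, as above, comes down to choosing a coarse-shading rate per tile from its depth and an ML-style focus/sharpness score. A heuristic stand-in sketch; the function name and thresholds are made-up assumptions, not any API's values:

```python
def shading_rate(distance, focus_sharpness):
    """Pick a VRS coarse-shading rate for a screen tile from its depth and a
    focus/sharpness score in [0, 1] -- a heuristic stand-in for the
    ML image-focus assessment described above (thresholds are illustrative)."""
    if focus_sharpness > 0.8 and distance < 10.0:
        return "1x1"   # full rate: sharp content close to the camera
    if focus_sharpness > 0.4 or distance < 50.0:
        return "2x2"   # quarter rate: mid-ground planes
    return "4x4"       # sixteenth rate: blurred or distant back planes

print(shading_rate(5.0, 0.9))    # 1x1
print(shading_rate(30.0, 0.5))   # 2x2
print(shading_rate(200.0, 0.1))  # 4x4
```

Real hardware exposes rates like these per 16x16 tile; the patching idea above would feed older games' back-plane depths into a selector of this shape.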
https://publik.tuwien.ac.at/files/publik_284281.pdf
https://www.cl.cam.ac.uk/teaching/1718/Graphics/Introduction_to_Graphics_2017_1pp.pdf
VRS is now in use on most modern progressive hardware.
(c)RS
FFT & fast precise wave operations in SiMD : (c)RS
Several features included for audio & video: Add to audio & video drivers & SDK, i love you <3 DL
In particular I want Bluetooth audio optimised with SiMD, AVX vector instructions & DSP process drivers..
The opportunity presents itself to improve the DAC; In particular of the video cards & audio devices & hard drives & BD Blu-ray player record & load functions of the fluctuating laser..
More than that, FFT is logical and fast; Precise & adaptive; FP & SiMD present these opportunities with correct FFT operations & SDKs.
3D surround is optimised the same, In particular with FFT-efficient code,
As one imagines, video is also affected by FFT..
Video colour & representation & wavelet compression & sharpness restoration..
Vivid presentation of audio & video & 3D objects and texture; For example DOT compression & image & audio presentation...
SSD & HD technology presents unique opportunities for magnetic-wave amplitude speculation & presentation.
Waves & Shape FFT original QFFT Audio device & CPU/GPU : (c)RS
The use of a simple FFT unit to output directly: Sound
& other content, such as BLENDER or DAC content : (c)RS
Analogue smoothed audio..
Using a capacitor on the pin output to a micro diode laser (for analogue fibre)
Digital output using:
8 to 128Bit multiple high-frequency burst mode..
(Multi-phase stepping at higher frequency, smoothly interpolated)
An analogue wave converted to digital in key steps through a DAC at higher frequency & amplitude.
For many systems an analogue wave makes sense when high-speed crystal digital is too expensive.
Multiple-frequency overlapped digital signals with a time formula are also possible.
The mic works by calculating angle on a drum...
Light.. and timing & dispersion...
The audio works by QFFT replication of the audio function..
The DAC works by quantifying as analogue, digital or metric matrix..
The CPU/GPU by interpreting the data of logic, space & timing...
We need to calculate; Quantum is not the necessary feature,
But it is the highlight of our:
Data storage cache,
Our temporary RAM,
Our data transport..
Of our fusion future.
FFT Examples : https://is.gd/ProcessorLasso in the SiMD Folder...
Evaluation of FFT and polynomial X-array algebra.. is here handled to over 50 bits...
As we understand it, the maths depends on a 64Bit value with a 128Bit..
As explained in the article, values have to be in identical ranges bit-wise; However odd bit-depth sizes are non-conforming (God I need coffee!)
In one example (page 9) most of the maths is 64Bit & one value 128Bit: "We therefore focus in this article on the use of floating-point (FP) FMA (fused multiply-add) instructions for floating-point based modular arithmetic. Since the FMA instruction performs two operations (a ∗ b + c) with one single final rounding, it can indeed be used to design a fast error-free transformation of the product of two floating-point numbers"
Our latest addition is a quite detailed example for us:
"High performance SIMD modular arithmetic for polynomial evaluation" (2020),
Pierre Fortin, Ambroise Fleury, François Lemaire, Michael Monagan.
Contains multiple algorithm examples & is open about the computer operations in use.
(c)Rupert S
*
ORO-DL : Objective Raytrace Optimised Dynamic Load & Machine Learning : RS
Simply places raytracing in the potent hands of powerful CPU & GPU features, from the 280X & GTX 1050 towards newer hardware.. While reducing strain for overworked GPU/CPU combinations..
Potentially improving the PS4+ and XBox One+ & Windows & Linux-based sources such as Firefox and Chrome,
Creating potential for SiMD & vectored AVX/FPU solutions with intrinsic ML.
This solution is also viable for complex tasks such as:
3D features, 3D sound & processing strategy,
Networking, video & other tasks you can vector:
(Plan, Map, Work out, Algebra, Maths, Sort & compare, Examine & Compute/Optimise/Anticipate)
(Machine Learning needs strategy)
Primary Resources of Objective Raytrace:
Resource assets CPU & GPU FPU's precision 8Bit, 16Bit, 32bit + Up to capacity,
Mathematical Raytrace with a priority of speed & beauty first,
HDR second (can be virtual (Dithered to 10Bit for example) AVX & SiMD
(Obviously GPU SiMD are important for scene render MESH & VRS so CPU for both FPU & Less utilised AVX SSSE2 is advisable)
Block render is the proposed format, The strategy optimises load times at reduced IRQ & DMA access times..
Reducing RAM fragmentation & increasing performance of DMA transferred work loads.
Block Render DMA Load; OptimusList:
64KB up to 64MB blocks DMA-requested to the float buffer in the GPU, for implementation in the vertex pipeline..
Under the proposal, the game's dynamic stack renders blocks, tested in development, that fit within the requirements of the game engine,
Priority list DMA buffer: 4MB, 16MB, 32MB, 64MB
The total block of raytraced content & audio, haptic, delusional & dreamy simulated,
SiMD shader content that fits within the recommended pre-render frame limit of 3 to 7 frames..
1 to 7 available & ideally between 3 & 5 frames, to avoid DMA, RAM & cache thrashing..
and data load.
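The block selection above (priority list 4/16/32/64MB holding a 3-to-5 frame pre-render window) can be sketched as a simple fit test. The function name, the default frame target and the byte accounting are all illustrative assumptions:

```python
def pick_dma_block(frame_bytes, frames_target=4, priority_list=(4, 16, 32, 64)):
    """Choose a DMA block size (in MB) from the priority list above that
    holds the target pre-render window (ideally 3 to 5 frames) without
    cache thrashing. frame_bytes estimates one frame's raytraced content."""
    need = frame_bytes * frames_target
    for size_mb in priority_list:
        if size_mb * 1024 * 1024 >= need:
            return size_mb
    return priority_list[-1]   # cap at the largest available block

# 2MB of content per frame, 4 frames -> 8MB needed -> the 16MB block.
print(pick_dma_block(2 * 1024 * 1024))
```

Picking the smallest block that fits is what keeps RAM fragmentation down while the DMA transfer stays in one request, as the proposal intends.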
As observed in earlier periods, such as on the AMIGA, the observable vector function of the CPU is not so great for texturing; However advancements and necessity allow this.
SiMD shader emulation allows all supported potential and, in the case of some GPUs.. AVX2, AVX 256/512 & dynamic cull...
The potency is limitless, especially with a dynamic shared AMD SVM, FP4/8/DOT optimised stack.
Background content & scenes can be pre-rendered or rendered dynamically (especially with small details)..
In terms of tessellation & raytrace & other vital SiMD vector computation, without affecting the main scene being directly rendered in the GPU..
Only enhancing the GPU's & CPU's potential to fully realise the scenes.
Fast Vector Cache DMA.
So what is the core logic?
CPU pre-frame raytrace is where you render the scene details: Mode
Plan to use 50% of the processor pre-frame & timed post-boot & in Optimise Mode :RS :
50% can be dynamic content fusion.
Integer (for up to 64Bit, or virtual float 32Bit.32Bit) (there is lots of integer capacity on the CPU, so never underestimate this), Vector, AVX, SiMD, FPU-processed logic ML
The majority of the raytrace the CPU does can be static/slow-dynamic & pre-planned content.
(Pre-planned? 30 seconds of forward play on tracks & in scene)
Content with static lights & ordered shift/planned motion does not have to be 100% processed in the GPU.
To be clear, CPU/GPU planned content can be transferred as tessellated 3D polygon content or as pre-optimised lower-resolution float maths & shaders.
(c)RS
Ray Filter & Limited Processor use efficiency optimisation pass : RFLP_EOP : RS
Ray masking, filtering & low/high filter pass 3D video,
Audio & haptic feedback & VRS/mesh shading, denoising & demosaicing & sharpening 3D raytracing
(3D audio positioning, for example, is a 4-to-32-ray effect; SiMD, AVX, low memory constraints, 1MB buffer x 7.1)
This does work for 3D effects on Bluetooth & audio devices
(RS)
Examples of optimisation of the denoiser include:
The M1 (Apple) Vector Array Unit (VAU) to speed up the bitmask arrays that assess the denoiser,
or an AVX 128Bit mask on the CPU, or 32Bit arrays x 8 (for example); Effective for sound drivers & 3D
We can use machine learning (small-scale SiMD & VAU) to assess the statistics of frame anticipation and motion..
(this will not cost a lot of learning; relevant to phones & older GPUs & CPUs)
We can use ML to sharpen after denoise & to ratio statistics on noise values versus sharpening,
Particularly shadows sharpened after denoise level 2, at low cost.
We can use ML to optimise the Read/Write Cycle & Caches..
We can learn on the local host & effectively learn across a whole world...
(Limitless samples)
Denoising Raytraced Soft Shadows on Xbox Series X|S and Windows with FidelityFX
(Presented by AMD)
& Rays of expectation presented by Rupert Summerskill
(c)RS
IO & DMA system drivers & data throughput: CPU/GPU/Compute unit: Scheduling works 3+1 vdat ways: (c)RS
Smart compute shaders with ML optimising the sort order:
Sort = Variable storage (4Kb to 64Kb & up to 4Mb; AMD having a 64Bit data RAM per SiMD line)
Being ideal for a single-unit SimV SimD/T & data collation & data optimisation,
With a memory action & location list (variable table),
A time-to-compute estimator & a prefetch activity parser & optimiser with a sorted workload time list..
Workloads are then sorted into estimated spaces in the compute load list & RUN.
The same ML-sorted pipeline (variable storage, action & location list, time-to-compute estimator, prefetch parser, sorted RUN list) applies in three further places:
IO & DMA system drivers & data throughput: CPU & GPU & FPU anticipatory scheduler with ML optimising the sort order.
IO & DMA system drivers & data throughput: OpenCL, SYCL cache-streamlined fragment optimiser with ML optimising the sort order.
IO & DMA system drivers & data throughput: TPU fragment: ML inference OpenCL, SYCL, shader cache & cache/RAM streamlined fragment optimiser with ML optimising the sort order.
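The sort-and-RUN pipeline above (estimate compute time, fit each workload into the per-unit variable storage, then run in sorted order) can be sketched as follows. Names, the slot size default and the tuple layout are all illustrative assumptions:

```python
def schedule(workloads, slot_bytes=64 * 1024):
    """Sort workloads into estimated compute-time order and admit each one
    that fits the per-unit variable storage (4KB..64KB slots, as above).
    Each workload is (name, est_time, size_bytes); returns the RUN list,
    with oversized jobs deferred to the end for a bigger slot."""
    run_list, deferred = [], []
    for name, est_time, size in sorted(workloads, key=lambda w: w[1]):
        (run_list if size <= slot_bytes else deferred).append(name)
    return run_list + deferred

jobs = [("blur", 3.0, 32 * 1024),
        ("sort", 1.0, 8 * 1024),
        ("fft", 2.0, 128 * 1024)]   # too big for the 64KB slot
print(schedule(jobs))  # ['sort', 'blur', 'fft']
```

A real anticipatory scheduler would also weigh prefetch state and memory location, per the variable table above; this shows only the time-sorted admission step.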
(c) Rupert S
Potential usages include:
3D VR live streaming & movies : RS
With logical arithmetic & machine-learning optimisations customised for speed & performance, & obviously with the GPU also.
We can estimate the room size and the dimensions and shape of all streaming performers & provide 3D VR for all video rooms in HDR 3DVR..
The potential code must do the scene estimate first, to calculate the quick data in later frames.
Later in the scene, only variables from object motion & a full 360-degree spin would do most of the differentiation we need for our works of action & motion in 3D render.
The potential is real; For when we have real objective dimensions & objects? We have real 3D.
The solution is the mathematics of logic.
All this can be ours :
Witcher 3 Example Video
https://science.n-helix.com/2018/01/integer-floats-with-remainder-theory.html
Technical video : Tanks :
https://www.youtube.com/watch?v=LGBHkpYq9hA
3D VR Haptic & learn: RS
Conceptually, the relevance of mapping haptic frequency response is the same parameter as in-ear representative 3D sound.
For a start, the concept of an entirely 3D environment does take the concept of 2D-rendering the 3D world & play with your mind.
Substantially deep vibration is conceptually higher & an intense pulse is thus deeper;
However the concept is also related to the hardness of earth & sky or skin.
Ear frequency-response mapping is a reflection of an infrared diode receptor & infra-sound harmonic 3D interpretation, such as sonar & radar.
Game Raytrace & Refraction Logic ML: (c)RS
Use several low-precision shape maps and not boxes,
In particular the tank is not transparent;
True depth is most likely for glass,
However the shape of the object is sliced along the tank (like a 16-part skin inter-sector),
The size of inter-section boxes is (Length/16) * (Width/16) * (Depth/16)
Depth also applies for glass & tank height & involves a volume format like so:
The size of inter-section boxes is (with transparency):
On Contact = (Length/16) * (Width/16) * (Depth/16) = Size |
(Size/% from one end) + (% one side) = Location |
(Arc,Sin,Tan + Location) * % Opacity = RayDepth |
(Arc,Sin,Tan + Location) * % Refraction index = RayDepth + Probable location
Optimising the number is tested with a performance test of a (simple, varied, complex) object scene & compared to previous results for the GPU & CPU type,
With also a RAM block such as 4GB/8GB/16GB.
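The slice-box and RayDepth formulas above can be sketched directly. One reading of the notation is assumed here: "(Arc,Sin,Tan + Location)" is taken as the sine of the incidence angle plus the fractional location, and the tank dimensions are hypothetical:

```python
import math

def intersection_box(length, width, depth, parts=16):
    """Size of one inter-section slice box:
    (Length/16) * (Width/16) * (Depth/16), per the formula above."""
    return (length / parts) * (width / parts) * (depth / parts)

def ray_depth(angle_deg, location, opacity_pct):
    """(trig term + Location) * % Opacity = RayDepth -- one reading of the
    formula above, using sin of the incidence angle (an assumption)."""
    return (math.sin(math.radians(angle_deg)) + location) * (opacity_pct / 100)

# A hypothetical 8 x 4 x 2 unit tank sliced 16 ways per axis.
print(intersection_box(8.0, 4.0, 2.0))   # 0.5 * 0.25 * 0.125
print(ray_depth(30.0, 0.5, 50.0))        # (0.5 + 0.5) * 0.5
```

Swapping the opacity percentage for a refraction index gives the second formula's probable-location term in the same shape.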
(c)RS*
Raytracing potent compute research:
https://bitshifter.github.io/2018/06/04/simd-path-tracing/
Realtime Ray Tracing on current CPU Architectures :
https://aras-p.info/blog/2018/04/10/Daily-Pathtracer-Part-7-Initial-SIMD/
https://aras-p.info/blog/2018/11/16/Pathtracer-17-WebAssembly/
Demo WebAssem: NoSiMD: SiMD & AVX Proof of importance
https://aras-p.info/files/toypathtracer/
HDR Raytracing - Thoughts & Theory - Mine Kraft.zip (7.33 MB):
https://mirrorace.com/m/4lOdx
https://is.gd/3DMLSorcerer
Update confirmed:
Nvidia even ray-traced the 980! in Vulkan... Works on AMD, Qualcomm, Android, NVidia and PowerVR..
The potential exists for all,
Powerful CPUs & GPUs make all possible #TraceThatCompute2020 .
QUiDOC-ML
Quick and uncomplicated dynamic feedback content optimisation of sub-pixel data and meshes
ML OM-FT DFS
Optimised Micro Force Tessellation Dynamic Fragment Shading : ML OM-FT DFS
Firstly, the list is as follows:
Polygon & shader: Memory array allocation with tessellation percentile availability
Scene polygon mesh load
Secondary memory array allocation
Optimised-list texture resource load/prefetch
Resource availability assessment for dynamic content
Tessellation of on-screen & in-view content & statics
In-scene data:
Static-load tessellation with a dynamic vertex modification buffer (a small piece of shared data cache (up to 2GB)) (Tessellation and shaders with mipmaps have modest requirements in HD)
Optimised Micro Force Tessellation Dynamic Fragment Shading : OMFT DFS
*
Screen resolution enhancement: up-scaling & down-scaling: 4D vector enhancement: Kernel + hint 3D:
Tessellation of the 2D/3D plane surface on the screen buffer,
3D component render into the output frame buffer, with RiS micro-smoothing predictive tessellation.
The objective is to present the user with a virtual resolution of almost unlimited size,
From the 2D, 3D, 4D, 5D, texture, poly-map, shader pipeline..
After we upscale the vector construction to whatever level we like, with tessellation to the render buffer, we will apply the texture map with AA + RiS sharpening SiMD, bump and shader mapping..
Apply multi-thread, SiMD, AVX, vector unit or float combinations to all render targets in the pipeline.
Bearing in mind that the polygonal representation of shadows, after we apply the SiMD, AVX, vector unit or float combinations to all render targets in the pipeline..
Does not consume the level of RAM that textures will use in our pipeline,
However applying vectored AA & sharpening to textures has the potential to hold the maths/shader resultant float/integer in the cache.
So by preference we have the ability to use either more RAM for texture + compression & also shader/float results & N-component pre-render target maths/variables.
This shall be fast & consume less RAM with DOT3/4/5 ARGB compression.
Principally, render into a virtual frame will be AA + sharpen + tessellation enhancement.
Tessellation of the 2D VR target output frame to map the colour & sharpening AA..
Into the final frame, which shall be smooth & look observably like vector fonts do with kernel fonting,
AKA kernel vectors with hinting; smooth, sharp & clean.
Virtual render path: For upscaling Cyberpunk 2077:
1440p/2160p into 4K/8K
(Does not have to mode-set 4K to a virtual resolution of 8K)
Virtual resolution is a method of super sampling into a lower resolution;
That smooths & sharpens the look, removing jagged edges.
4x4 SuperSample @ 4K & then 2x2 super sample to 8K
(A one-pass 6x SuperSample may be worth it for 8K)
2 pipelines to tessellate lines & textures with SiMD, to sharpen edges & features
3 pipelines of SiMD to add additional logic sets:
Pre-frame ML (forward render; biking, for example, is a linear path)
Wavelet smoothing & a colour, tone, HDR, WCG render pass
Fine edge rounding & tessellation.
(c)RS
(c)Rupert Summerskill
****
LUT tables and tone mapping: Vectors
https://gpuopen.com/using-amd-freesync-2-hdr-gamut-mapping/
On the subject of LUT tables and tone mapping, 2 methods are available to us..
The vectors can be mapped in real time with ray tracing (they work out the vector)
The vectors and dimensions can also be worked out with OpenCL and Direct Compute..
Both OpenGL/Vulkan & DirectX have direct compute..
Many forms of vector calculation that involve intricate maths can be worked out in a vector or OpenCL vector library function. The advantage of OpenCL libraries is that functions and tables can be worked out without ever having to re-program the maths-solving OpenCL code,
Such that OpenCL & Direct Compute libraries can fulfil many tasks. Bearing in mind that OpenCL & Direct Compute are work-solve time-controlled, we are able to use the functions for many tasks, including web-browser maths and composure. With these examples we will define the future of display maths code & logic.
AVX & float can obviously be used, leaving compute vectors like SIMD viable for code logic.
Compute shaders are also able; Long logic denotes the advantage of vectored OpenCL & Direct Compute/AVX.
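The LUT half of the pair above is the compute-friendly one: a tone map reduces to a table lookup with interpolation between entries. A minimal sketch; the function name and the 5-entry curve are made up for illustration (a raytraced gamut map would replace the table):

```python
def apply_lut(value, lut):
    """Map a normalised channel value in [0, 1] through a 1D LUT with
    linear interpolation between entries -- the table-driven tone-mapping
    method described above."""
    pos = value * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# A tiny gamma-like tone curve (illustrative values only).
curve = [0.0, 0.45, 0.68, 0.86, 1.0]
print(apply_lut(0.5, curve))   # lands exactly on entry 2 -> 0.68
```

The appeal noted in the text holds here: updating the table never requires reprogramming the lookup code itself.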
Vectored code : tessellation & other functions using SIMD & Compute Shader maths:
https://www.youtube.com/watch?v=0DLOJPSxJEg
Ultra High Definition Colour:
Video colour-definition smoothing & optimisation with sharp-edge HDR contrast adaptation.
Dynamic colour remap & optimisation,
Wide paths of 512Bit, 256Bit, 128Bit, 96Bit; 16Bit per channel into & from 10Bit per channel & 8Bit per channel..
With dynamic hardware-accelerated colour translation & super dithering with AA in transparent ranges; LOD translation in vectored 3D through FPU/GPU/AVX/SiMD.
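The 16Bit-to-10Bit translation with "super dithering" above can be sketched as quantisation with one LSB of noise, so the truncation error averages out instead of banding. A sketch with made-up names; the random noise source is an assumption (hardware would use an ordered or error-diffusion pattern):

```python
import random

def dither_to_bits(value16, out_bits=10, rng=random.Random(0)):
    """Reduce a 16-bit channel value to out_bits with dither noise, so
    the truncation error averages out instead of banding -- the super
    dithering translation step described above (noise source assumed)."""
    shift = 16 - out_bits
    noise = rng.randrange(1 << shift)        # up to one output LSB of noise
    return min((value16 + noise) >> shift, (1 << out_bits) - 1)

# Averaging many dithered conversions of one 16-bit grey recovers the
# fractional level that a plain truncation would flatten to 512.
vals = [dither_to_bits(32768 + 31) for _ in range(2000)]
print(sum(vals) / len(vals))   # averages between 512 and 513
```

The same shape serves the 12Bit HDMI-link dithering mentioned earlier in the document, just with a different `out_bits`.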
https://science.n-helix.com/2014/08/turning-classic-film-into-3d-footage-crs.html
https://science.n-helix.com/2019/06/vulkan-stack.html
(c)Rupert S
Have you thought about using shaders in networking? To realise the network data strategy...
The same is true for displays & audio & other science data such as neural networks,
Image improvement and encoding & entertainment video codecs, 64Bit HDR dynamic contrast.
We can apply the interpolation to video, for smoothing and vectorisation of the video elements in float for sharpening, & to our interpolation for tessellation of the RiS sharpening, for all our GPU and CPU elements.
Great for games without the direct feed of 30GB of DOT3/4/5+ compressed texture cache,
Cache & layer download from fast B-Ray, ROMs & storage: Cache dynamic.
glTF, DOT3 to DOT5 compress all textures; At a minimum in 4Bit to 16Bit per channel, with optimised layer patching; That is when we overlay higher-bit-depth textures & HLSL shaders..
In layers on GPU/CPU/Vector/Float processed & merged texture content; The lower-bit-depth base texture is optimised JPG-style and GIF & merged,
Lower-order bumpmaps & shaders are merged into the mipmap layer, to reduce processing overhead at a reasonable rate of memory usage,
Combined-order process CSS:JS allows 2kb files to merge multiple JSON: All are GZ, LHA7 compressed & optimised/minified,
Storage of large files is internal slot / external HDD & BlueRay/DVD/USB key flash & micro HDD, at 8GB to 2TB minimum specs: USB2/3/3.2. The higher the data rate on test, the higher the desired storage profile that ML will allocate:
Dynamic Allocation ML: User Option: Default : External USB Drive for data loading under 250MB/S & 64GB+ of space.
Core library re-evaluation, for replacement with upgraded libraries.
As the compression formulas are introduced into the library of games on the servers..
Core game packs of 64MB to 2GB are plug & play, downloaded into place on the console;
No decompression is needed; The level packs & core compressed texture blocks are stored..
As micro FastLayerCompression/Decompression with quick-sort pre-compressed texture formats in LVM/VM/VMD drives..
Optimally they will include 5 games' worth of 15-minute-play auto-saves of location content.
Obviously core 1MB to 10MB downloads of cache data go in the GameHIVE VM dynamic cache drive..
Micro 15KB to 250KB dynamic scenario content: Weather, enemies, updates, friendly data.
Game cloud storage philosophy to be based upon: Upvote, pro review & necessity;
It is optimised for texture & vertex file re-compression & optimisation.