ResultsFileName = 0×0 empty char array Why? Where are my results?

Edit: Turns out I was missing a needed toolbox.
Hello,
I am not getting any errors, and I do not understand why I am not getting any output. I am trying to batch process a large number of ECG signals. Below is my code and the two relevant functions. Any help is greatly appreciated. I am very new.
```matlab
d = importSections("Dx_sections.csv");

% set the number of recordings
n = height(d);

% settings
HRVparams = InitializeHRVparams('test_physionet')

for ii = 1:n
    % Import waveform (ECG)
    [record, signals] = read_edf(strcat(d.PID(ii), '/baseline.edf'));
    myecg = record.ECG;
    Ann = [];
    [HRVout, ResultsFileName] = Main_HRV_Analysis(myecg, '', 'ECGWaveform', HRVparams)
end
```

```matlab
function [HRVout, ResultsFileName] = Main_HRV_Analysis(InputSig,t,InputFormat,HRVparams,subID,ann,sqi,varargin)
% ====== HRV Toolbox for PhysioNet Cardiovascular Signal Toolbox =========
%
% Main_HRV_Analysis(InputSig,t,InputFormat,HRVparams,subID,ann,sqi,varargin)
% OVERVIEW:
%
% INPUT:
%   InputSig    - Vector containing RR intervals data (in seconds)
%                 or ECG/PPG waveform
%   t           - Time indices of the rr interval data (seconds) or
%                 leave empty for ECG/PPG input
%   InputFormat - String that specifies if the input vector is:
%                 'RRIntervals' for RR interval data
%                 'ECGWaveform' for ECG waveform
%                 'PPGWaveform' for PPG signal
%   HRVparams   - struct of settings for hrv_toolbox analysis that can
%                 be obtained using the InitializeHRVparams.m function:
%                 HRVparams = InitializeHRVparams();
%
% OPTIONAL INPUTS:
%   subID       - (optional) string to identify current subject
%   ann         - (optional) annotations of the RR data at each point
%                 indicating the type of the beat
%   sqi         - (optional) Signal Quality Index; requires a matrix with
%                 at least two columns. Column 1 should be timestamps of
%                 each sqi measure, and Column 2 should be SQI on a scale
%                 from 0 to 1.
%   Use InputSig, Type pairs for additional signals such as ABP or PPG.
%   The input signal must be a vector containing the signal waveform and
%   the Type: 'ABP' and/or 'PPG'.
%
% OUTPUTS:
%   results         - HRV time and frequency domain metrics as well
%                     as AC and DC, SDANN and SDNNi
%   ResultsFileName - Name of the file containing the results
%
% NOTE: before running this script, review and modify the parameters in
%       the "initialize_HRVparams.m" file according to the specifics of
%       the new project (see the readme.txt file for further details)
% EXAMPLES
%   - rr interval input
%     Main_HRV_Analysis(RR,t,'RRIntervals',HRVparams)
%   - ECG waveform input
%     Main_HRV_Analysis(ECGsig,t,'ECGWaveform',HRVparams,'101')
%   - ECG waveform and also ABP and PPG waveforms
%     Main_HRV_Analysis(ECGsig,t,'ECGWaveform',HRVparams,[],[],[], abpSig,
%     'ABP', ppgSig, 'PPG')
%
% DEPENDENCIES & LIBRARIES:
%   HRV Toolbox for PhysioNet Cardiovascular Signal Toolbox
%   https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox
%
% REFERENCE:
%   Vest et al. "An Open Source Benchmarked HRV Toolbox for Cardiovascular
%   Waveform and Interval Analysis" Physiological Measurement (In Press), 2018.
%
% REPO:
%   https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox
% ORIGINAL SOURCE AND AUTHORS:
%   This script written by Giulia Da Poian
%   Dependent scripts written by various authors (see functions for details)
% COPYRIGHT (C) 2018
% LICENSE:
%   This software is offered freely and without warranty under the
%   GNU (v3 or later) public license. See license file for more information
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

if nargin < 4
    error('Wrong number of input arguments')
end
if nargin < 5
    subID = '0000';
end
if nargin < 6
    ann = [];
end
if nargin < 7
    sqi = [];
end
if length(varargin) == 1 || length(varargin) == 3
    error('Incomplete Signal-Type pair')
elseif length(varargin) == 2
    extraSigType = varargin(2);
    extraSig = varargin{1};
elseif length(varargin) == 4
    extraSigType = [varargin(2) varargin(4)];
    extraSig = [varargin{1} varargin{3}];
end
if isa(subID,'cell'); subID = string(subID); end

% Control on signal length
if (strcmp(InputFormat, 'ECGWaveform') && length(InputSig)/HRVparams.Fs < HRVparams.windowlength) ...
        || (strcmp(InputFormat, 'PPGWaveform') && length(InputSig)/HRVparams.Fs ...
% (the remainder of Main_HRV_Analysis is missing from the post)
```

The second relevant function, InitializeHRVparams.m, appears from the frequency-domain settings onward:

```matlab
ULF = [0 .0033];       % Requires window > 300 s
VLF = [0.0033 .04];    % Requires at least 300 s window
LF = [.04 .15];        % Requires at least 25 s window
HF = [0.15 0.4];       % Requires at least 7 s window
HRVparams.freq.limits = [ULF; VLF; LF; HF];
HRVparams.freq.zero_mean = 1;      % Default: 1, Option for subtracting the mean from the input data
HRVparams.freq.method = 'lomb';    % Default: 'lomb', Options: 'lomb', 'burg', 'fft', 'welch'
HRVparams.freq.plot_on = 0;

% The following settings are for debugging spectral analysis methods
HRVparams.freq.debug_sine = 0;     % Default: 0, Adds sine wave to tachogram for debugging
HRVparams.freq.debug_freq = 0.15;  % Default: 0.15
HRVparams.freq.debug_weight = .03; % Default: 0.03

% Lomb:
HRVparams.freq.normalize_lomb = 0; % Default: 0, 1 = Normalizes Lomb Periodogram, 0 = Doesn't normalize

% Burg: (not recommended)
HRVparams.freq.burg_poles = 15;    % Default: 15, Number of coefficients for spectral
                                   % estimation using the Burg method (not recommended)

% The following settings are only used when the user specifies spectral
% estimation methods that use resampling: 'welch', 'fft', 'burg'
HRVparams.freq.resampling_freq = 7;            % Default: 7, Hz
HRVparams.freq.resample_interp_method = 'cub'; % Default: 'cub', cubic spline method ('lin' = linear spline method)
HRVparams.freq.resampled_burg_poles = 100;     % Default: 100

%% 11. SDANN and SDNNI Analysis Settings
HRVparams.sd.on = 1;              % Default: 1, SD analysis 1=On or 0=Off
HRVparams.sd.segmentlength = 300; % Default: 300, window length in seconds

%% 12. PRSA Analysis Settings
HRVparams.prsa.on = 1;            % Default: 1, PRSA Analysis 1=On or 0=Off
HRVparams.prsa.win_length = 30;   % Default: 30, length of the PRSA signal before and after
                                  % the anchor points (the resulting PRSA has length 2*L)
HRVparams.prsa.thresh_per = 20;   % Default: 20%, percent difference that one beat can
                                  % differ from the next in the PRSA code
HRVparams.prsa.plot_results = 0;  % Default: 0
HRVparams.prsa.scale = 2;         % Default: 2, scale parameter for wavelet analysis (to compute AC and DC)

%% 13. Peak Detection Settings
% The following settings are for jqrs.m
HRVparams.PeakDetect.REF_PERIOD = 0.250; % Default: 0.25 (should be 0.15 for FECG), refractory period in sec between two R-peaks
HRVparams.PeakDetect.THRES = .6;         % Default: 0.6, Energy threshold of the detector
HRVparams.PeakDetect.fid_vec = [];       % Default: [], if some subsegments should not be used for finding the optimal
                                         % threshold of the P&T, input the indices of the corresponding points here
HRVparams.PeakDetect.SIGN_FORCE = [];    % Default: [], force sign of peaks (positive value/negative value)
HRVparams.PeakDetect.debug = 0;          % Default: 0
HRVparams.PeakDetect.ecgType = 'MECG';   % Default: 'MECG', options: adult ECG ('MECG') or fetal ECG ('fECG')
HRVparams.PeakDetect.windows = 15;       % Default: 15, size (in seconds) of the window onto which to perform QRS detection

%% 14. Entropy Settings
% Multiscale Entropy
HRVparams.MSE.on = 1;                    % Default: 1, MSE Analysis 1=On or 0=Off
HRVparams.MSE.windowlength = [];         % Default: [], window size in seconds; by default MSE is performed on the entire signal
HRVparams.MSE.increment = [];            % Default: [], window increment
HRVparams.MSE.RadiusOfSimilarity = 0.15; % Default: 0.15, radius of similarity (% of std)
HRVparams.MSE.patternLength = 2;         % Default: 2, pattern length
HRVparams.MSE.maxCoarseGrainings = 20;   % Default: 20, maximum number of coarse-grainings

% SampEn and ApEn
HRVparams.Entropy.on = 1;                    % Default: 1, entropy analysis 1=On or 0=Off
HRVparams.Entropy.RadiusOfSimilarity = 0.15; % Default: 0.15, radius of similarity (% of std)
HRVparams.Entropy.patternLength = 2;         % Default: 2, pattern length

%% 15. DFA Settings
HRVparams.DFA.on = 1;            % Default: 1, DFA Analysis 1=On or 0=Off
HRVparams.DFA.windowlength = []; % Default: [], window size in seconds; by default DFA is performed on the entire signal
HRVparams.DFA.increment = [];    % Default: [], window increment
HRVparams.DFA.minBoxSize = 4;    % Default: 4, smallest box width
HRVparams.DFA.maxBoxSize = [];   % Largest box width (default in DFA code: signal length/4)
HRVparams.DFA.midBoxSize = 16;   % Medium time scale box width (default in DFA code: 16)

%% 16. Poincaré plot
HRVparams.poincare.on = 1;       % Default: 1, Poincaré analysis 1=On or 0=Off

%% 17. Heart Rate Turbulence (HRT) - Settings
HRVparams.HRT.on = 1;            % Default: 1, HRT analysis 1=On or 0=Off
HRVparams.HRT.BeatsBefore = 2;   % Default: 2, # of beats before PVC
HRVparams.HRT.BeatsAfter = 16;   % Default: 16, # of beats after PVC and CP
HRVparams.HRT.GraphOn = 0;       % Default: 0, do not plot
HRVparams.HRT.windowlength = 24; % Default: 24h, window size in hours
HRVparams.HRT.increment = 24;    % Default: 24h, sliding window increment in hours
HRVparams.HRT.filterMethod = 'mean5before'; % Default: mean5before, HRT filtering option

%% 18. Output Settings
HRVparams.gen_figs = 0;          % Generate figures
HRVparams.save_figs = 0;         % Save generated figures
if HRVparams.save_figs == 1
    HRVparams.gen_figs = 1;
end

% Format settings for HRV Outputs
HRVparams.output.format = 'csv'; % 'csv' creates a csv file for output, 'mat' creates a .mat file
HRVparams.output.separate = 0;   % Default: 1 = separate files for each subject, 0 = all results in one file
HRVparams.output.num_win = [];   % Specify number of lowest hr windows returned;
                                 % leave blank if all windows should be returned

% Format settings for annotations generated
HRVparams.output.ann_format = 'binary'; % 'binary' = binary annotation file generated, 'csv' = ASCII CSV file generated
end
```
submitted by MisuzBrisby to matlab

Video Encoding in Simple Terms

Nowadays it is difficult to imagine a field of human activity that digital video has not entered in one way or another. We watch it on TV, on mobile devices, and on desktop computers; we record it ourselves with digital cameras, and we encounter it on the roads (unpleasant, but true), in stores, hospitals, schools and universities, and in industrial enterprises of various profiles. As a consequence, words and terms directly related to the digital representation of video are becoming ever more firmly and widely embedded in our lives. Questions arise in this area from time to time. What are the differences between the various devices and programs we use to encode and decode digital video, and what exactly do they do? Which of these devices or programs are better or worse, and in which respects? What do all these endless MPEG-2, H.264/AVC, VP9, H.265/HEVC, etc. mean? Let's try to understand.

A very brief historical reference

The first widely adopted video compression standard, MPEG-2, was finalized in 1996, after which digital satellite television began to develop rapidly. The next standard, MPEG-4 Part 10 (H.264/AVC), adopted in 2003, roughly doubled the compression of video data and enabled the development of DVB-T/C systems, Internet TV, and the emergence of a variety of video sharing and video communication services. From 2010 to 2013, the Joint Collaborative Team on Video Coding (JCT-VC) worked intensively on the next video compression standard, which its developers called High Efficiency Video Coding (HEVC); it delivered another twofold increase in compression. This standard was approved in 2013. That same year, the VP9 standard developed by Google was adopted, intended to be no worse than HEVC in its degree of video data compression.

Basic stages of video encoding

There are a few simple ideas at the core of video compression algorithms. If we take some part of an image (in the MPEG-2 and AVC standards this part is called a macroblock), there is a high probability that, near this segment in the same frame or in neighboring frames, there will be a segment containing a similar image, one that differs little in pixel intensity values. Thus, to transmit the image in the current segment, it is enough to transmit only its difference from a previously encoded similar segment. The process of finding similar segments among previously encoded images is called Prediction. The set of difference values between the current segment and the found prediction is called the Residual. There are two main types of prediction. In the first, the prediction values are linear combinations of pixels adjacent to the current image segment on the left and on top. This type of prediction is called Intra prediction. In the second, linear combinations of pixels from similar image segments in previously encoded frames (these frames are called reference frames) are used as the prediction. This type is called Inter prediction. To restore a segment encoded with Inter prediction during decoding, one needs not only the Residual but also the number of the frame where the similar segment is located and the coordinates of that segment.
Residual values obtained during prediction obviously contain, on average, less information than the original image and therefore require fewer bits to transmit. To further increase the degree of compression, video coding systems apply a spectral transform to the residual, typically a discrete cosine transform. The transform concentrates the energy of the two-dimensional residual signal into a few fundamental harmonics; the selection of those harmonics is made at the next coding stage, quantization. After quantization, the sequence of spectral coefficients contains a small number of large values, and the remaining values are very likely to be zero. As a result, the amount of information contained in the quantized spectral coefficients is significantly lower (by dozens of times) than in the original image.
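To make the transform-and-quantize step concrete, here is a minimal Python/NumPy sketch, not tied to any particular standard: it applies a 2D DCT to a synthetic 8x8 residual block, quantizes the coefficients with a uniform step, and shows how few of them survive. The block contents and the step size are invented for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)

# A synthetic 8x8 residual block: small, smooth values, as left
# over after a reasonably good prediction.
residual = rng.normal(0.0, 4.0, (8, 8)) + np.linspace(0, 3, 8)

# The 2D DCT concentrates the block's energy in a few low-frequency bins.
coeffs = dctn(residual, norm="ortho")

# Uniform quantization: a larger step gives more zeros and more distortion.
step = 8.0
quantized = np.round(coeffs / step)
print("non-zero coefficients:", np.count_nonzero(quantized), "of 64")

# Decoder side: dequantize and invert the transform.
reconstructed = idctn(quantized * step, norm="ortho")
print("max reconstruction error:", np.abs(reconstructed - residual).max())
```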
At the next stage of coding, the quantized spectral coefficients, together with the side information needed to perform the prediction during decoding, are subjected to entropy coding. The idea is to assign the most frequent values of the encoded stream the shortest codewords (those containing the fewest bits). The best compression ratio at this stage (close to the theoretically achievable one) is provided by arithmetic coding algorithms, which are what modern video compression systems mainly use.
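The payoff of entropy coding can be estimated without writing an arithmetic coder: the Shannon entropy of the quantized-coefficient stream is a lower bound on the achievable average bits per symbol. A small calculation with a made-up histogram that mimics the "few large values, many zeros" shape described above:

```python
import numpy as np

# Hypothetical symbol counts for a quantized-coefficient stream:
# zeros dominate, large magnitudes are rare.
counts = np.array([900, 50, 25, 15, 10])
p = counts / counts.sum()

entropy = -(p * np.log2(p)).sum()   # bits/symbol an ideal arithmetic coder approaches
fixed = np.ceil(np.log2(len(counts)))
print(f"{entropy:.2f} bits/symbol vs. {fixed:.0f} bits with fixed-length codes")
```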
From the above, the main factors affecting the effectiveness of a particular video compression system become apparent. First of all, of course, come the factors that determine the effectiveness of the Intra and Inter prediction. The second set of factors relates to the orthogonal transform and quantization, which select the fundamental harmonics of the residual signal. The third is determined by the volume and compactness of the side information that accompanies the Residual and is needed to compute the Prediction in the decoder. Finally, the fourth set comprises the factors that determine the effectiveness of the final stage, entropy coding.
Let's illustrate some of the possible implementations (far from all) of the coding stages listed above, using the examples of H.264/AVC and HEVC.

AVC Standard

In the AVC standard, the basic structural unit of the image is the macroblock, a square area of 16x16 pixels (Figure 1). When searching for the best prediction, the encoder can choose among several ways of partitioning each macroblock. With Intra prediction there are three options: predict the entire block as a whole, split the macroblock into four 8x8 blocks, or split it into sixteen 4x4 blocks, and predict each such block independently. The set of possible macroblock partitionings for Inter prediction is much richer (Figure 1), which lets the size and position of the predicted blocks adapt to the position and shape of object boundaries moving in the video frame.
Fig 1. Macroblocks in AVC and possible partitioning when using Inter-Prediction.
In AVC, the column of pixels to the left of the predicted block and the row of pixels immediately above it are used for Intra prediction (Figure 2). For blocks of sizes 4x4 and 8x8, 9 prediction modes are used. In the mode called DC, all predicted pixels take a single value: the arithmetic mean of the "neighbor pixels" highlighted in Fig. 2 with a bold line. In the other modes, "angular" prediction is performed: the values of the "neighbor pixels" are propagated into the predicted block along the directions indicated in Fig. 2.
If, when moving along a given direction, a predicted pixel falls between "neighbor pixels", an interpolated value is used for the prediction. For blocks of size 16x16, 4 prediction modes are used. One of them is the DC prediction already described. Two others correspond to the "angular" modes with prediction directions 0 and 1. The fourth is Plane prediction: the values of the predicted pixels are determined by the equation of a plane whose slope coefficients are determined by the values of the "neighboring pixels".
Fig 2. “Neighboring pixels” and angular modes of Intra-Prediction in AVC
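For intuition, here is a toy Python version of two of the modes just described: DC and vertical (direction 0) prediction. Real AVC also interpolates between neighbors in the diagonal modes and handles unavailable neighbors, which this sketch skips.

```python
import numpy as np

def intra_dc(top: np.ndarray, left: np.ndarray, size: int) -> np.ndarray:
    """DC mode: every predicted pixel is the mean of the neighbor pixels."""
    dc = int(round((top.sum() + left.sum()) / (len(top) + len(left))))
    return np.full((size, size), dc, dtype=np.int32)

def intra_vertical(top: np.ndarray, size: int) -> np.ndarray:
    """Vertical mode (direction 0): the row above is copied downward."""
    return np.tile(top, (size, 1)).astype(np.int32)

top = np.array([100, 102, 104, 106])  # row of pixels just above the block
left = np.array([98, 99, 101, 103])   # column of pixels just to its left
print(intra_dc(top, left, 4))
print(intra_vertical(top, 4))
```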
Inter prediction in AVC can be implemented in one of two ways, and each of them determines the type of the macroblock (P or B). For P-blocks (predictive blocks), pixel values from an area of a previously coded (reference) image are used as the prediction. Reference images are kept in the RAM buffer of decoded frames (the decoded picture buffer, or DPB) for as long as they are needed for Inter prediction, and a reference list is built in the DPB from the indexes of these images.
The encoder signals to the decoder the number of the reference image in the list and the offset of the area used for prediction relative to the position of the predicted block (this displacement is called the motion vector). The offset can be specified with an accuracy of 1/4 pixel; for non-integer offsets, interpolation is performed. Different blocks in one image can be predicted from areas located in different reference images.
In the second option of Inter prediction, used for B-blocks (bi-predictive blocks), two reference images are used; their indexes are placed in two lists (list0 and list1) in the DPB. The two reference indexes and the two offsets determining the positions of the reference areas are transmitted to the decoder, and the B-block pixel values are calculated as a linear combination of the pixel values from the two reference areas. For non-integer offsets, the reference image is interpolated.
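Stripped of quarter-pixel interpolation and bi-prediction weighting, motion compensation is just copying a displaced block from a reference frame and transmitting the displacement. A minimal sketch, with arbitrary array contents and motion vector:

```python
import numpy as np

def motion_compensate(reference, x, y, dx, dy, size):
    """Predict the block at (x, y) from the area displaced by the motion
    vector (dx, dy) in the reference frame (integer-pel only; real codecs
    interpolate the reference for quarter-pel vectors)."""
    return reference[y + dy:y + dy + size, x + dx:x + dx + size]

reference = np.arange(64 * 64, dtype=np.int32).reshape(64, 64)
pred = motion_compensate(reference, x=16, y=16, dx=3, dy=-2, size=8)

# The encoder sends the reference index, (dx, dy), and the residual;
# for a B-block it would combine two such predictions linearly.
print(pred.shape)
```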
As already mentioned, after predicting the values of the encoded block and calculating the residual signal, the next coding step is the spectral transform, and AVC provides several options for it. When an entire 16x16 macroblock is Intra-predicted, the residual signal is divided into 4x4 blocks, and each of them is subjected to an integer analog of the 4x4 two-dimensional discrete cosine transform.
The resulting zero-frequency (DC) components of all the blocks are then subjected to an additional orthogonal Walsh-Hadamard transform. With Inter prediction, the residual signal is divided into 4x4 or 8x8 blocks, and each block undergoes a 4x4 or 8x8 (respectively) two-dimensional discrete cosine transform (DCT).
In the next step, the spectral coefficients are quantized. Quantization reduces the number of bits needed to represent the coefficient values and greatly increases the number of coefficients with zero values. These effects provide the compression, i.e. they reduce the number and bit width of the values representing the encoded image. The flip side of quantization is distortion of the encoded image: the larger the quantization step, the greater the compression ratio, but also the greater the distortion.
The final stage of encoding in AVC is entropy coding, implemented with Context-Adaptive Binary Arithmetic Coding (CABAC). This stage provides additional compression of the video data without introducing any distortion into the encoded image.

Ten years later. HEVC standard: what’s new?

The new H.265/HEVC standard develops the compression methods and algorithms of H.264/AVC. Let's briefly review the main differences.
The analog of the macroblock in HEVC is the Coding Unit (CU). Within each CU, the areas for computing the prediction are selected; these are called Prediction Units (PU). Each CU also sets the limits within which the areas for the discrete orthogonal transform of the residual signal are selected; these areas are called Transform Units (TU).
The main distinguishing feature of HEVC here is that the division of a video frame into CUs is performed adaptively, so that the CU boundaries can be adjusted to the boundaries of objects in the image (Figure 3). Such adaptivity makes it possible to achieve an exceptionally high prediction quality and, as a consequence, a low level of the residual signal.
Another undoubted advantage of this adaptive division of the frame into blocks is the extremely compact description of the partition structure. For the entire video sequence, the maximum and minimum possible CU sizes are fixed (for example, 64x64 as the maximum and 8x8 as the minimum). The entire frame is covered by CUs of the maximum size, left to right, top to bottom.
Obviously, no information needs to be transmitted for such a covering. If a partition is required within some CU, this is indicated by a single flag (the Split Flag). If this flag is set to 1, the CU is divided into 4 CUs (with a maximum CU size of 64x64, partitioning yields four CUs of size 32x32 each).
For each of the resulting CUs, a Split Flag value of 0 or 1 can in turn be transmitted; in the latter case this CU is again divided into 4 CUs of smaller size. The process continues recursively until the Split Flag of every resulting CU equals 0 or the minimum possible CU size is reached. The nested CUs thus form a quadtree (whose maximum-size roots are called Coding Tree Units, CTU). As already mentioned, within each CU, areas for computing the prediction, Prediction Units (PU), are selected. With Intra prediction, the PU can coincide with the CU (2Nx2N mode), or the CU can be divided into four square PUs of half the size (NxN mode, available only for CUs of minimum size). With Inter prediction, there are eight possible ways of partitioning each CU into PUs (Figure 4).
Fig.3 Video frame partitioning into CUs is conducted adaptively
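The split-flag recursion is compact enough to sketch directly. This toy parser consumes a stream of flags and returns the leaf CUs of one 64x64 CTU; at the minimum size no flag is read, exactly as described above.

```python
def parse_cu(flags, x, y, size, min_size=8):
    """Return (x, y, size) tuples for the leaf CUs of one coding tree.

    A flag of 1 splits the current CU into four quadrants; CUs of
    min_size are leaves and consume no flag.
    """
    if size > min_size and next(flags) == 1:
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += parse_cu(flags, x + dx, y + dy, half, min_size)
        return leaves
    return [(x, y, size)]

# One 64x64 CTU: split once, then split only the first 32x32 quadrant.
flags = iter([1, 1, 0, 0, 0, 0, 0, 0, 0])
print(parse_cu(flags, 0, 0, 64))  # four 16x16 leaves plus three 32x32 leaves
```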
The idea of spatial prediction in HEVC remained the same as in AVC: linear combinations of the neighboring pixel values adjacent to the block on the left and above are used as the predicted sample values in the PU. However, the set of spatial prediction modes in HEVC has become significantly richer. In addition to the Planar mode (the analog of Plane in AVC) and the DC mode, each PU can be predicted by one of 33 "angular" modes; that is, the number of directions along which values are propagated from the "neighbor" pixels has roughly quadrupled.
Fig. 4. Possible partitioning of the Coding Unit into Prediction Units in the spatial (Intra) and temporal (Inter) CU prediction modes
We can point out two main differences in Inter prediction between HEVC and AVC. First, HEVC uses better interpolation filters (with a longer impulse response) when computing reference areas at non-integer offsets. The second difference concerns how the information about the reference area, needed by the decoder to perform the prediction, is represented. HEVC introduces a "merge mode", in which different PUs with the same offsets relative to their reference areas are merged: the motion information (motion vector) for the whole merged area is transmitted in the stream only once, which significantly reduces the amount of transmitted information.
In HEVC, the size of the two-dimensional discrete transform applied to the residual signal is determined by the size of a square area called the Transform Unit (TU). Each CU is the root of a quadtree of TUs: the top-level TU coincides with the CU, and the root TU can be divided into four TUs of half the size, each of which, in turn, is a TU and can be divided further.
The transform size is determined by the size of the lowest-level TUs. HEVC defines transforms for four block sizes: 4x4, 8x8, 16x16, and 32x32. These transforms are integer analogs of the two-dimensional discrete cosine transform of the corresponding size. For 4x4 Intra-predicted TUs there is also a separate transform, an integer analog of the discrete sine transform.
The procedures for quantizing the spectral coefficients of the residual signal and for entropy coding are practically identical in AVC and HEVC.
Let's note one more point not mentioned so far. The quality of decoded images, and with it the degree of video data compression, is significantly influenced by the post-filtering that decoded images undergo before being placed in the DPB for use in Inter prediction.
In AVC there is one kind of such filtering, the deblocking filter. It reduces the blocking artifacts that result from quantizing the spectral coefficients after the orthogonal transform of the residual signal.
In HEVC a similar deblocking filter is used, plus an additional non-linear filtering procedure called Sample Adaptive Offset (SAO). Based on an analysis of the distribution of pixel values during encoding, a table of corrective offsets is built, and during decoding these offsets are added to the values of a part of the CU pixels.

And what is the result?

Figures 5-8 show the results of encoding several high-resolution (HD) video sequences by two encoders: one compresses the video data in the H.265/HEVC standard (marked HM on all the graphs), the other in the H.264/AVC standard.
Fig. 5. Encoding results of the video sequence Aspen (1920x1080 30 frames per second)
Fig. 6. Encoding results of the video sequence BlueSky (1920x1080 25 frames per second)
Fig. 7. Encoding results of the video sequence PeopleOnStreet (1920x1080 30 frames per second)
Fig. 8. Encoding results of the video sequence Traffic (1920x1080 30 frames per second)
Coding was performed at different quantization levels for the spectral coefficients, hence with different levels of distortion of the video images. The results are presented in bitrate (Mbps) vs. PSNR (dB) coordinates; the PSNR values characterize the degree of distortion.
On average, PSNR values below 36 dB correspond to a high level of distortion, i.e. low-quality video images; the range from 36 to 40 dB corresponds to average quality; and at PSNR values above 40 dB we can speak of high video quality.
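For reference, PSNR is computed from the mean squared error between the original and the decoded frame; the generic definition for 8-bit video takes a few lines:

```python
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (8-bit frames by default)."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 10                   # one distorted pixel
print(f"{psnr(a, b):.1f} dB")  # ~40.2 dB, i.e. high quality
```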
We can roughly estimate the compression ratios provided by these encoding systems. In the medium-quality region, the bitrate of the HEVC encoder is about 1.5 times lower than the bitrate of the AVC encoder. The bitrate of the uncompressed video stream is easily determined as the product of the number of pixels in each frame (1920 x 1080), the number of bits per pixel (8 + 2 + 2 = 12 for 8-bit 4:2:0 video), and the number of frames per second (30).
The result is about 750 Mbps. The graphs show that, in the average-quality region, the AVC encoder delivers a bitrate of about 10-12 Mbps, so the compression ratio is about 60-75 times. As already mentioned, the HEVC encoder compresses about 1.5 times more.
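The same arithmetic, spelled out:

```python
width, height, fps = 1920, 1080, 30
bits_per_pixel = 8 + 2 + 2  # 8-bit 4:2:0: full-resolution luma plus two subsampled chroma planes

raw_bps = width * height * bits_per_pixel * fps
print(raw_bps / 1e6, "Mbps")  # ~746 Mbps, i.e. "about 750"

for encoded_mbps in (10, 12):  # average-quality AVC operating points from the graphs
    print(f"{encoded_mbps} Mbps -> compression ratio {raw_bps / (encoded_mbps * 1e6):.0f}x")
```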

About the author

Oleg Ponomarev, 16 years in video encoding and digital signal processing, expert in statistical radiophysics and radio wave propagation. Assistant Professor, PhD, Tomsk State University, Radiophysics department. Head of Elecard Research Lab.
submitted by VideoCompressionGuru to u/VideoCompressionGuru

Making a super low cost trainer/dev kit. What do you wish you had in the kits/trainers you used to learn electronics?

Useless Backstory:
My original plan was to design a digital logic trainer for my students that could be submerged in alcohol without damage, to sanitize between classes. I did that, and the prototypes work great.
It's fair to assume the campus will close pretty quickly after the first spikes in covid cases. This means the original design won't be useful, students won't be in to share the equipment. Many departments plan to just gut their lab courses while some plan to throw huge tool/equipment costs at their students for at-home labs. I don't consider removing hands-on work a viable option, and equipment would cost a ton because the school store is terrible as far as where they can get products from, plus it takes its own cut of ~20%. The school store is the only way to pay for things with financial aid, so I have to go through them.
I priced everything out for my original design and discovered the board is so unbelievably cheap ($22 vs the $350 we pay for just ONE of the trainers the students use) that I plan to just make a new version that also includes all the features from the analog, processor, and PLC trainers. It should cover everything from learning Ohm's law to designing and testing amplifiers, from digital logic through assembly language up to C++/Python, and from relay/ladder logic to PLC programming.
To the point:
For reference, here's a Google image search of what I am designing a replacement for. Click on some at random and check the prices and specs. There's no reason they should cost hundreds. The ones that don't cost a ton are just switches, buttons, and LEDs wired to headers - something anyone here can do for $10.
My goal is to add all the features from every single trainer I've seen/used but keep below 10% of the price of what is currently available. Each unit of equipment my students use (scopes, generators, supplies, digital/analog trainers, processor boards, plcs, etc) cost the department $5k+, and that's even after I got them to approve sparkfun as a vendor to save money. Assuming the students pick up shitty, low spec versions of everything for doing their labs at home, we're still looking at $1k. I like <$100 better, and would like the students to have something they can continue using to learn/develop electronics even after graduation.
So far I'm at $48 per trainer, completely assembled and in a case and I'm just about ready to make the next batch of prototypes but want to know what additional features I should cram into it.
What should I add that isn't listed below?
Supplies:
(1)+/- 19V 3A isolated supply
(2)+/- 5V 1.5A supplies
(1) 19V variable supply
(1) Constant-current linear regulated supply
(2) CV/CC switchmode supplies (fairly well filtered)
Power input is by default USB-C 20V/100W but I got impatient waiting on the USB-C sockets to come in the mail and rigged one up with a laptop DC jack (19.5V) for testing. I liked it. Most people have a box of old adapters in their house so I might just throw empty spots all around the back edge with the traces and pads for 10 different types of sockets so that anyone can use any supply they have lying around within the 18-34v 3A+ range. It already has overvoltage/undervoltage/overcurrent protection, adding a receiver for laptop signal pins that tell the system what the power brick is rated for would be easy.
There's also a USB micro-b port that can power everything but the analog supplies. It is also used for reprogramming firmware in the event of serious corruption, but updates and changes by default occur over wifi.
Outputs:
(1) 500mA isolated function generator (12.5 MHz)
(1) function generator that acts as a 16.5V 1A CT transformer output (max 1 MHz)
(2) digital clocks (1 Hz to 40 kHz)
(1) digital clock (1 kHz to 200 MHz)
(24) 50mA 3-state digital outputs, protected from short circuits to any other line on the board, including the analog voltages. Each is configurable to a switch, button, low frequency clock, or tied to the PLC emulator or processor used for teaching programming.
Communication:
Wifi/Bluetooth, USB client and host, Modbus TCP/IP, Modbus RTU, CAN bus, i2c, i2s, spi, plus anything slow enough to be bitbanged will also be available as a feature through the UI, but not have a dedicated port. For example, you can load a 1-24 bit binary string in through the switches and shift it into 74000 series shift registers.
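As a sketch of what "slow enough to be bitbanged" means in practice, here is the data-clock-latch pattern for shifting a bit string into a 74xx595-style shift register. The gpio_write helper is a hypothetical stand-in; the trainer's real firmware API isn't shown here.

```python
def gpio_write(pin: str, level: int) -> None:
    """Hypothetical stand-in for a real GPIO register write."""
    print(f"{pin} <- {level}")

def shift_out(bits: str) -> None:
    """Bit-bang a binary string into a 74xx595-style shift register:
    put a bit on DATA, pulse CLOCK, and pulse LATCH once at the end
    so all outputs change simultaneously."""
    for bit in bits:
        gpio_write("DATA", int(bit))
        gpio_write("CLOCK", 1)
        gpio_write("CLOCK", 0)
    gpio_write("LATCH", 1)
    gpio_write("LATCH", 0)

shift_out("101100001111")  # any 1-24 bit pattern loaded from the switches
```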
Inputs:
(4) Multimeters with 10mV precision, two of which are differential and isolated.
(24) 3-state digital inputs (+/- 20V capable, configurable logic levels)
(2) analog inputs (1Msps) - I hesitate to call it an oscilloscope because the next revision will include an FPGA that can actually handle huge amounts of data at high frequency. For now it dumps the data to a RAM IC and the main processor grabs a selection of addresses and renders a graph on the screen. There's no interrupts or anything that could get sub-clockcycle measurements on transitions directly from that data.
(2) 100 MHz counters with automatic or adjustable trigger.
User Interface:
3.5" color touch screen - while every feature can be accessed from the touch screen, it's mostly for configuring things. I've made sure to put all features as physical buttons, switches, and knobs.
Wifi AP with captive portal - same access as the touch screen, but also used for uploading code to the processors (ASM ide and arduino ide) or PLC emulator (openplc). Working with a friend to help ensure mobile/tablet compatibility.
Bluetooth - available but not currently used.
Features:
IC testing with learning function - throw any common DIP chip into a socket and it will test whether it's fried. The UI also allows you to add in new chips, where you define which pins are inputs, outputs, power, ground, oscillator, analog, etc and whether you want it to automatically learn from every possible input configuration or a set sequence of commands. This includes i2c/spi chips.
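The learning mode boils down to recording a truth table once and replaying it. A toy model with a simulated 2-input NAND standing in for a real DIP in the socket; all names here are invented, and sequential or i2c parts would need the scripted command sequences mentioned above rather than an exhaustive sweep.

```python
from itertools import product

def learn_truth_table(drive, n_inputs):
    """Drive every input combination once and record the outputs."""
    return {combo: drive(combo) for combo in product((0, 1), repeat=n_inputs)}

def test_chip(drive, reference):
    """Replay the learned table against a (possibly fried) chip."""
    return all(drive(combo) == expected for combo, expected in reference.items())

nand = lambda pins: (1 - (pins[0] & pins[1]),)  # simulated healthy chip
table = learn_truth_table(nand, 2)

print(test_chip(nand, table))            # True: chip matches the learned table
print(test_chip(lambda p: (0,), table))  # False: stuck-low output detected
```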
Programming microcontrollers - throw a dip uC into the same socket as the ic tester and it'll configure itself to whatever pinout you define or select from a list. Already have a USB ISP for AVR but will add loads of ports matching the most popular in-system-programmers.
Matrix I/O sniffing - plug any matrix keypad or matrix led display into the I/O lines and it will automatically map them for you.
Communications sniffing - find IR remote codes, i2c addresses, RF codes, etc without external circuitry.
Compatibility with the Analog Discovery 2, Atmel ICE, LabView/Multisim, and I'm tinkering with SCPI to connect to bench equipment.
PLC Programming through OpenPLC.
Full diagnostic utility with schematic and fault indication through the UI. It will literally tell you what is wrong within a 1 centimetre radius on the board, show you the PCB/silkscreen of the area and optionally the schematic, and tell you what to replace to fix it. I added fault detection with port expanders, analog multiplexers, and dummy loads to help me test my original prototypes. It was supposed to be temporary but the work is already done and only added $5 to the total cost so now it's going to be in every future revision. Not a big jump to add pictures of every subcircuit PCB traces/silkscreen.
As an added note, when I'm done with each set of prototypes I plan to give them away on this subreddit for free, but I want to be sure there's no liability on my part. I'm concerned because all but the last version won't have UL/FCC/CE compliance. If anyone could direct me to information on this sort of thing, I'd really appreciate it. I'm thinking maybe I just directly call them "as-is" or defective or scrap?
submitted by -Mikee to arduino

Factorio Multi Assembler

What do you want this factory to produce? Yes.
Multi Assembler in current multiplayer session

tl;dr;

I wanted to tinker around with the microcontroller mod, and I "hate" the pre-robotics gameplay when it comes to non-bulk recipes (laser turrets, production buildings, specialized ammo...): handcrafting is slow and automation is tedious. So I engineered a factory design to produce virtually any recipe dynamically.

Demo Video

The production queue can be seen on the right, with Q being the number of recipes queued at the moment.
https://streamable.com/ygnvs0

How does it work?

This screenshot provides an overview of the mostly vanilla proof of concept; only the microcontroller mod and the recipe combinator mod are required here.
Subsystem Overview
Resource provider
Source of raw resources (Iron, Wood...)
Multi Assembler
Dynamic assemblers, each with one microcontroller and two recipe combinators: one reads the assembler's status, the other sets the recipe delivered by the microcontroller, which in turn gets the recipe from the "wanted recipes" red signal network connecting the different subsystems.
Multi Assembler Microcontroller Code explained
  • See linked factorio forum post
Possible improvements / features
  • Avoid the "180 tick do while" and react to events instead, e.g. inserter read hand content
  • Invert the sorting logic, removing the "set 2000" part in the code and making the red assembler network semantically more logical: "the higher the signal, the more I want this recipe"
Quirks and remarks
  • A mostly vanilla build as shown in the PoC above is not feasible for larger quantities, but should be possible if combined with techniques like sushi belting and increasing the initial delay of the "do while". This is not covered in the demo map, as I am using the warehouse mod to work around it.
Recipe Logic
Defines which recipes can be produced, based on the available resources and the recipes configured in the "production targets" constant combinators. In essence, this subsystem emits a constant signal of "1" onto the red multi assembler network for each recipe which a.) should and b.) can be produced.
At the moment this subsystem is rather basic and can be improved upon (see quirks and remarks); a toy model is sketched below.
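Here is that rule ("should and can be produced") as a toy model in ordinary code rather than combinators; the recipe names and ingredient costs are invented, and the real build operates on circuit-network signals instead:

```python
# Invented recipe data for illustration.
RECIPES = {
    "laser-turret": {"steel-plate": 20, "electronic-circuit": 20, "battery": 12},
    "stone-wall": {"stone-brick": 5},
}

def wanted_recipes(targets: dict, stock: dict) -> dict:
    """Emit a signal of 1 for every recipe that is below its production
    target AND has all of its ingredients available in the chest."""
    signals = {}
    for recipe, ingredients in RECIPES.items():
        should = stock.get(recipe, 0) < targets.get(recipe, 0)
        can = all(stock.get(item, 0) >= qty for item, qty in ingredients.items())
        if should and can:
            signals[recipe] = 1  # constant signal onto the red network
    return signals

targets = {"laser-turret": 10, "stone-wall": 100}
stock = {"laser-turret": 4, "steel-plate": 200, "electronic-circuit": 50,
         "battery": 40, "stone-brick": 2}
print(wanted_recipes(targets, stock))  # {'laser-turret': 1}; walls lack bricks
```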
Recipe Logic Microcontroller Code (TOP) explained
  • See linked factorio forum post
Possible improvements / features
  • Add configurable recipe priorities, e.g. "I want laser turrets before walls, and belts before everything else"
  • Better recipe priorities based on recipe complexity / production targets: "I want 5 assemblers producing cables needed in bulk for circuits, while at most one assembler produces power armor"
  • Possible solution: calculate the priority based on the distance to the production target. The higher the difference between the production target and the items in stock, the lower the signal to the red multi assembler network.
Quirks and remarks
  • If intermediate products go missing or cannot be produced (say you manually provide blue circuits, and remove them again after a recipe using blue circuits was added to the production queue), the recipe will be stuck indefinitely in the production queue. To solve this, simply reset the cache combinator of this subsystem.
  • Items with large stack sizes may lead to problems if the steel chest contains fewer than (number of assemblers * item stack size + 1) items. That's because the assemblers will "eat up" all the resources of the steel chest, which in turn leads to the system thinking no resources of this type are available, and thus aborting the production.
  • Slow raw resource input or slow intermediate recipe production will lead to a slowly flipping binary state of "I can produce this higher-tier recipe" and "I no longer have enough resources for this recipe". Ultimately this is a resource input problem, but it could be handled more gracefully for the other queued recipes.
  • Depending on the setup, production targets are not hit exactly, because of a delay in evaluating the production target when checking whether the recipe should still be produced; in some cases this leads to overproduction.
Production Target Constant Combinators
Add the recipes you want the Multi Assembler to produce here. The quantity defines the production target.
Missing Resource Indicator
Will flash red if any resources required to produce a recipe are missing from the steel chest of the multi assembler. The missing resources are shown as positive values in the combinator to the right of the flashing light.
Production Queue Visualizer
Optional component, simply visualizes the amount of the currently queued recipes.

Download & Blueprints

See my post at https://forums.factorio.com/viewtopic.php?f=8&t=85141
I am new to reddit and couldn't figure out a way to post them here without adding way too many lines to this post; maybe someone can enlighten me if there is some kind of "single line code" option?
PS: I am not a native speaker, if you need clarification on some parts feel free to ask.
submitted by heximal2A to factorio

Differences between LISP 1.5 and Common Lisp, Part 2a

Here is the first half of the second part (I ran out of characters again...) of a series of posts documenting the many differences between LISP 1.5 and Common Lisp. The preceding post can be found here.
In this part we're going to look at LISP 1.5's library of functions.
Of the 146 symbols described in The LISP 1.5 Programmer's Manual, sixty-two have the same names as standard symbols in Common Lisp. These symbols are enumerated here.
The symbols t and nil have been discussed already. The remaining symbols are operators. We can divide them into groups based on how semantics (and syntax) differ between LISP 1.5 and Common Lisp:
  1. Operators that have the same name but have quite different meanings
  2. Operators that have been extended in Common Lisp (e.g. to accept a variable number of arguments), but that otherwise have similar enough meanings
  3. Operators that have remained effectively the same
The third group is the smallest. Some functions differ only in that they have a larger domain in Common Lisp than in LISP 1.5; for example, the length function works on sequences instead of lists only. Such functions are pointed out below. All the items in this list should, given the same input, behave identically in Common Lisp and LISP 1.5. They all also have the same arity.
These are somewhat exceptional items on this list. In LISP 1.5, car and cdr could be used on any object; for atoms, the result was undefined, but there was a result. In Common Lisp, applying car and cdr to anything that is not a cons is an error. Common Lisp does specify that taking the car or cdr of nil results in nil, which was not a feature of LISP 1.5 (it comes from Interlisp).
Common Lisp's equal technically compares more things than the LISP 1.5 function, but of course Common Lisp has many more kinds of things to compare. For lists, symbols, and numbers, Common Lisp's equal is effectively the same as LISP 1.5's equal.
In Common Lisp, expt can return a complex number. LISP 1.5 does not support complex numbers (as a first class type).
As mentioned above, Common Lisp extends length to work on sequences. LISP 1.5's length works only on lists.
It's kind of a technicality that this one makes the list. In terms of functionality, you probably won't have to modify uses of return; in the situations in which it was used in LISP 1.5, it worked the same as it would in Common Lisp. But Common Lisp's definition of return is really hiding a huge difference between the two languages, discussed under prog below.
As with length, this function operates on sequences and not only lists.
In Common Lisp, this function is deprecated.
LISP 1.5 defined setq in terms of set, whereas Common Lisp makes setq the primitive operator.
Of the remaining thirty-three, seven are operators that behave differently from the operators of the same name in Common Lisp:
  • apply, eval
The connection between apply and eval has been discussed already. Besides setq and prog (and the special or common mechanisms), function parameters were the only way to bind variables in LISP 1.5 (the idea of a value cell was introduced by Maclisp); the manual describes apply as "The part of the interpreter that binds variables" (p. 17).
  • compile
In Common Lisp the compile function takes one or two arguments and returns three values. In LISP 1.5 compile takes only a single argument, a list of function names to compile, and returns that argument. The LISP 1.5 compiler would automatically print a listing of the generated assembly code, in the format understood by the Lisp Assembly Program or LAP. Another difference is that compile in LISP 1.5 would immediately install the compiled definitions in memory (and store a pointer to the routine under the subr or fsubr indicators of the compiled functions).
  • count, uncount
These have nothing to do with Common Lisp's count. Instead of counting the number of items in a collection satisfying a certain property, count is an interface to the "cons counter". Here's what the manual says about it (p. 34):
The cons counter is a useful device for breaking out of program loops. It automatically causes a trap when a certain number of conses have been performed.
The counter is turned on by executing count[n], where n is an integer. If n conses are performed before the counter is turned off, a trap will occur and an error diagnostic will be given. The counter is turned off by uncount[NIL]. The counter is turned on and reset each time count[n] is executed. The counter can be turned on so as to continue counting from the state it was in when last turned off by executing count[NIL].
This counting mechanism has no real counterpart in Common Lisp.
  • error
In Common Lisp, error is part of the condition system, and accepts a variable number of arguments. In LISP 1.5, it has a single, optional argument, and of course LISP 1.5 had no condition system. It had errorset, which we'll discuss later. In LISP 1.5, executing error would cause an error diagnostic and print its argument if given. While this is fairly similar to Common Lisp's error, I'm putting it in this section since the error handling capabilities of LISP 1.5 are very limited compared to those of Common Lisp (consider that this was one of the only ways to signal an error). Uses of error in LISP 1.5 won't necessarily run in Common Lisp, since LISP 1.5's error accepted any object as an argument, while Common Lisp's error needs designators for a simple-error condition. An easy conversion is to change (error x) into (error "~A" x).
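If you are converting such calls mechanically, a small wrapper along these lines may help; the name error-1.5 is my own, not anything from either language:
(defun error-1.5 (&optional (x nil x-supplied-p)) ; hypothetical compatibility shim
  (if x-supplied-p
      (error "~A" x)  ; print the object, as LISP 1.5's error diagnostic did
      (error "Error"))) ; no argument given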
  • map
This function is quite different from Common Lisp's map. The incompatibility is mentioned in Common Lisp: The Language:
In MacLisp, Lisp Machine Lisp, Interlisp, and indeed even Lisp 1.5, the function map has always meant a non-value-returning version. However, standard computer science literature, including in particular the recent wave of papers on "functional programming," have come to use map to mean what in the past Lisp implementations have called mapcar. To simplify things henceforth, Common Lisp follows current usage, and what was formerly called map is named mapl in Common Lisp.
But even mapl isn't the same as map in LISP 1.5, since mapl returns the list it was given and LISP 1.5's map returns nil. Actually there is another, even larger incompatibility that isn't mentioned: The order of the arguments is different. The first argument of LISP 1.5's map was the list to be mapped and the second argument was the function to map over it. (The order was changed in Maclisp, likely because of the extension of the mapping functions to multiple lists.) You can't just change all uses of map to mapl because of this difference. You could define a function like map-1.5, such as
(defun map-1.5 (list function) (mapl function list) nil) 
and replace map with map-1.5 (or just shadow the name map).
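For a larger body of code, the shadowing route might look like this minimal sketch (the package name is hypothetical):
(defpackage #:lisp15-compat
  (:use #:common-lisp)
  (:shadow #:map))
(in-package #:lisp15-compat)
(defun map (list function) ; LISP 1.5's argument order and nil return value
  (mapl function list)
  nil)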
  • function
This operator has been discussed earlier in this post.
Common Lisp doesn't need anything like LISP 1.5's function. However, mostly by coincidence, it will tolerate it in many cases; in particular, it works with lambda expressions and with references to global function definitions.
  • search
This function isn't really anything like Common Lisp's search. Here is how it is defined in the manual (p. 63, converted from m-expressions into Common Lisp syntax):
(defun search (x p f u) (cond ((null x) (funcall u x)) ((funcall p x) (funcall f x)) (t (search (cdr x) p f u)))) ; p, f, and u are functional arguments, hence the funcalls; shadowing cl:search assumed
Somewhat confusingly, the manual says that it searches "for an element that has the property p"; one might expect the second branch to test (get x p).
The function is kind of reminiscent of the testr function, used to exemplify LISP 1.5's indefinite scoping in the previous part.
  • special, unspecial
LISP 1.5's special variables are pretty similar to Common Lisp's special variables, but only because all of LISP 1.5's variables are pretty similar to Common Lisp's special variables. The difference between regular LISP 1.5 variables and special variables is that symbols declared special (using this special special operator) have a value on their property list under the indicator special, which is used by the compiler when no binding exists in the current environment. The interpreter knew nothing of special variables; thus they could be used only in compiled functions. Well, they could be used in any function, but the interpreter wouldn't find the special value. (It appears that this is where the tradition of Lisp dialects having different semantics when compiled versus when interpreted began; eventually Common Lisp would put an end to the confusion.)
You can generally change special into defvar and get away fine. However there isn't a counterpart to unspecial. See also common.
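A rough sketch of the conversion (the LISP 1.5 form shown in the comment is approximate):
;; LISP 1.5: (SPECIAL (X Y)) declares X and Y special, for the compiler's benefit.
;; Common Lisp:
(defvar x)          ; proclaims x special and leaves it unbound
(defvar y 'initial) ; proclaims y special and gives it a global value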
Now come the operators that are essentially the same in LISP 1.5 and in Common Lisp, but have some minor differences.
  • append
The LISP 1.5 function takes only two arguments, while Common Lisp allows any number.
  • cond
In Common Lisp, when no test in a cond form is true, the result of the whole form is nil. In LISP 1.5, an error was signaled, unless the cond was contained within a prog, in which case it would quietly do nothing. Note that the cond must be at the "top level" inside the prog; cond forms at any deeper level will error if no condition holds.
  • gensym
The LISP 1.5 gensym function takes no arguments, while the Common Lisp function takes an optional argument (a prefix string or a number).
  • get
Common Lisp's get takes three arguments, the last of which is a value to return if the symbol does not have the indicator on its property list; in LISP 1.5 get has no such third argument.
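For illustration (the symbol and property names are invented):
(get 'sky 'color)       ; => NIL in both dialects when the property is absent
(get 'sky 'color 'none) ; => NONE; the third, default argument is Common Lisp only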
  • go
In LISP 1.5 go was allowed in only two contexts: (1) at the top level of a prog; (2) within a cond form at the top level of a prog. Later dialects would loosen this restriction, leading to much more complicated control structures. While progs in LISP 1.5 were somewhat limited, it is at least fairly easy to tell what's going on (e.g. loop conditions). Note that return does not appear to be limited in this way.
  • intern
In Common Lisp, intern can take a second argument specifying in what package the symbol is to be interned, but LISP 1.5 does not have packages. Additionally, the required argument to intern is a string in Common Lisp; LISP 1.5 doesn't really have strings, and so intern instead wants a pointer to a list of full words (of packed BCD characters; the print names of symbols were stored in this way).
  • list
In Common Lisp, list can take any number of arguments, including zero, but in LISP 1.5 it seems that it must be given at least one argument.
  • load
In LISP 1.5, load can't be given a filespec as an argument, for many reasons. Actually, it can't be given anything as an argument; its purpose is simply to hand control over to the loader. The loader "expects octal correction cards, 704 row binary cards, and a transfer card." If you have the source code that would be compiled into the material to be loaded, then you can just put it in another file and use Common Lisp's load to load it in. But if you don't have the source code, then you're out of luck.
  • mapcon, maplist
The differences between Common Lisp and LISP 1.5 regarding these functions are similar to those for map given above. Both of these functions returned nil in LISP 1.5, and they took the list to be mapped as their first argument and the function to map as their second argument. A major incompatibility to note is that maplist in LISP 1.5 did what mapcar in Common Lisp does; Common Lisp's maplist is different.
  • member
In LISP 1.5, member takes none of the fancy keyword arguments that Common Lisp's member does, and returns only a truth value, not the tail of the list.
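A compatibility sketch, assuming (as the manual's definition suggests) that the comparison is by equal:
(defun member-1.5 (item list) ; hypothetical name
  (if (member item list :test #'equal) t nil)) ; truth value only, never the tail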
  • nconc
In LISP 1.5, this function took only two arguments; in Common Lisp, it takes any number.
  • prin1, print, terpri
In Common Lisp, these functions take an optional argument specifying an output stream to which they will send their output, but in LISP 1.5 prin1 and print take just one argument, and terpri takes no arguments.
  • prog
In LISP 1.5, the list of program variables was just that: a list of variables. No initial values could be provided as they can in Common Lisp; all the program variables started out bound to nil. Note that the program variables are just like any other variables in LISP 1.5 and have indefinite scope.
In the late '70s and early '80s, the maintainers of Maclisp and Lisp Machine Lisp wanted to add "naming" abilities to prog. You could say something like
(prog outer () ... (prog () (return ... outer))) 
and the return would jump not just out of the inner prog, but also out of the outer one. However, they ran into a problem with integrating a named prog with parts of the language that were based on prog. For example, they could add a special case to dotimes to handle an atomic first argument, since regular dotimes forms had a list as their first argument. But Maclisp's do had two forms: the older (introduced in 1969) form
(do atom initial step-form end-test body...) 
and the newer form, which was identical to Common Lisp's do. The older form was equivalent to
(do ((atom initial step-form)) (end-test) body...) 
Since the older form was still supported, they couldn't add a special case for an atomic first argument because that was the normal case of the older kind of do. They ended up not adding named prog, owing to these kinds of difficulties.
However, during the discussion of how to make named prog work, Kent Pitman sent a message that contained the following text:
I now present my feelings on this issue of how DO/PROG could be done in order to end this haggling, part of which I think comes out of the fact that these return tags are tied up in PROG-ness and so on ... Suppose you had the following primitives in Lisp:
(PROG-BODY ...) which evaluated all non-atomic stuff. Atoms were GO-tags. Returns () if you fall off the end. RETURN does not work from this form.
(PROG-RETURN-POINT form name) name is not evaluated. Form is evaluated and if a RETURN-FROM specifying name (or just a RETURN) were executed, control would pass to here. Returns the value of form if form returns normally or the value returned from it if a RETURN or RETURN-FROM is executed. [Note: this is not a [*]CATCH because it is lexical in nature and optimized out by the compiler. Also, a distinction between NAMED-PROG-RETURN-POINT and UNNAMED-PROG-RETURN-POINT might be desirable – extrapolate for yourself how this would change things – I'll just present the basic idea here.]
(ITERATE bindings test form1 form2 ...) like DO is now but doesn't allow return or goto. All forms are evaluated. GO does not work to get to any form in the iteration body.
So then we could just say that the definitions for PROG and DO might be (ignore for now old-DO's – they could, of course, be worked in if people really wanted them but they have nothing to do with this argument) ...
(PROG [<name>] <bindings> . <body>) => (PROG-RETURN-POINT (LET <bindings> (PROG-BODY . <body>)) [<name>])
(DO [<name>] <bindings> <end-test> . <body>) => (PROG-RETURN-POINT (ITERATE <bindings> <end-test> (PROG-BODY . <body>)) [<name>])
Other interesting combinations could be formed by those interested in them. If these lower-level primitives were made available to the user, he needn't feel tied to one of PROG/DO – he can assemble an operator with the functionality he really wants. 
Two years later, Pitman would join the team developing the Common Lisp language. For a little while, incorporating named prog was discussed, which eventually led to the splitting of prog in quite a similar way to Pitman's proposal. Now prog is a macro, simply combining the three primitive operators let, block, and tagbody. The concept of the tagbody primitive in its current form appears to have been introduced in this message, which is a writeup by David Moon of an idea due to Alan Bawden. In the message he says
The name could be GO-BODY, meaning a body with GOs and tags in it, or PROG-BODY, meaning just the inside part of a PROG, or WITH-GO, meaning something inside of which GO may be used. I don't care; suggestions anyone?
Guy Steele, in his proposed evaluator for Common Lisp, called the primitive tagbody, which stuck. It is a little bit more logical than go-body, since go is just an operator and allowed anywhere in Common Lisp; the only special thing about tagbody is that atoms in its body are treated as tags.
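As a rough sketch of that arrangement, a stripped-down prog (no init forms or declarations; my approximation, not the standard expansion) can be assembled from the three primitives:
(defmacro prog-1.5 ((&rest vars) &body body)
  `(block nil                                      ; gives return something to return from
     (let ,(mapcar (lambda (v) (list v nil)) vars) ; variables start as nil, as in LISP 1.5
       (tagbody ,@body))))                         ; atoms in the body serve as go tags
With that, (prog-1.5 (x) (setq x 41) (return (1+ x))) evaluates to 42.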
  • prog2
In LISP 1.5, prog2 was really just a function that took two arguments and returned the result of the evaluation of the second one. The purpose of it was to avoid having to write (prog () ...) everywhere when all you want to do is call two functions. In later dialects, progn would be introduced and the "implicit progn" feature would remove the need for prog2 used in this way. But prog2 stuck around and was generalized to a special operator that evaluated any number of forms, while holding on to the result of the second one. Programmers developed the (prog2 nil ...) idiom to save the result of the first of several forms; later prog1 was introduced, making the idiom obsolete. Nowadays, prog1 and prog2 are used typically for rather special purposes.
Regardless, in LISP 1.5 prog2 was a machine-coded subroutine that was equivalent to the following function definition in Common Lisp:
(defun prog2 (one two) two) 
  • read
The read function in LISP 1.5 did not take any arguments; Common Lisp's read takes four. In LISP 1.5, read read either from "SYSPIT" or from the punched card reader. It seems that SYSPIT stood for "SYStem Paper (maybe Punched) Input Tape", and that it designated a punched tape reader; alternatively, it might designate a magnetic tape reader, but the manual makes reference to punched cards. But more on input and output later.
  • remprop
The only difference between LISP 1.5's remprop and Common Lisp's remprop is that the value of LISP 1.5's remprop is always nil.
  • setq
In Common Lisp, setq takes an arbitrary even number of arguments, representing pairs of symbols and values to assign to the variables named by the symbols. In LISP 1.5, setq takes only two arguments.
  • sublis
LISP 1.5's sublis and subst do not take the keyword arguments that Common Lisp's sublis and subst take.
  • trace, untrace
In Common Lisp, trace and untrace are operators that take any number of arguments and trace the functions named by them. In LISP 1.5, both trace and untrace take a single argument, which is a list of the functions to trace.

Functions not in Common Lisp

We turn now to the symbols described in the LISP 1.5 Programmer's Manual that don't appear in Common Lisp. Let's get the easiest case out of the way first: Here are all the operators in LISP 1.5 that have a corresponding operator in Common Lisp, with notes about differences in functionality where appropriate.
  • add1, sub1
These functions are the same as Common Lisp's 1+ and 1- in every way, down to the type genericism.
  • conc
This is just Common Lisp's append, or LISP 1.5's append extended to more than two arguments.
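Taking that description at face value, a one-line compatibility definition:
(defun conc (&rest lists) (apply #'append lists))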
  • copy
Common Lisp's copy-list function does the same thing.
  • difference
This corresponds to -, although difference takes only two arguments.
  • divide
This function takes two arguments and is basically a consing version of Common Lisp's floor:
(divide x y) = (multiple-value-list (floor x y)) 
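Or, as a definition with an example call (my sketch):
(defun divide (x y) (multiple-value-list (floor x y))) ; (divide 7 2) => (3 1)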
  • digit
This function takes a single argument, and is like Common Lisp's digit-char-p except that the radix isn't variable, and it returns a true or false value only (and not the weight of the digit).
  • efface
This function deletes the first appearance of an item from a list. A call like (efface item list) is equivalent to the Common Lisp code (delete item list :count 1).
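As a definition (a sketch; delete defaults to eql, so the :test argument matters if, as in LISP 1.5, comparison should be by equal):
(defun efface (item list) (delete item list :count 1 :test #'equal))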
  • greaterp, lessp
These correspond to Common Lisp's > and <, although greaterp and lessp take only two arguments.
As a historical note, the names greaterp and lessp survived in Maclisp and Lisp Machine Lisp. Both of those languages also had > and <, which were used for the two-argument case; Common Lisp favored genericism and went with > and < only. However, a vestige of the old predicates still remains, in the lexicographic ordering functions: char-lessp, char-greaterp, string-lessp, string-greaterp.
  • minus
This function takes a single argument and returns its negation; it is equivalent to the one-argument case of Common Lisp's -.
  • leftshift
This function is the same as ash in Common Lisp; it takes two arguments, m and n, and returns m × 2^n. Thus if the second argument is negative, the shift is to the right instead of to the left.
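In other words:
(defun leftshift (m n) (ash m n)) ; (leftshift 3 2) => 12, (leftshift 12 -2) => 3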
  • liter
This function is identical in essence to Common Lisp's alpha-char-p, though more precisely it's closer to upper-case-p; LISP 1.5 was used on computers that made no provision for lowercase characters.
  • pair
This is equivalent to the normal, two-argument case of Common Lisp's pairlis.
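That is (Common Lisp leaves the order of pairs in pairlis's result unspecified):
(defun pair (x y) (pairlis x y)) ; (pair '(a b) '(1 2)) => ((A . 1) (B . 2)), up to ordering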
  • plus
This function takes any number of arguments and returns their sum; its Common Lisp counterpart is +.
  • quotient
This function is equivalent to Common Lisp's /, except that quotient takes only two arguments.
  • recip
This function is equivalent to the one-argument case of Common Lisp's /.
  • remainder
This function is equivalent to Common Lisp's rem.
  • times
This function takes any number of arguments and returns their product; its Common Lisp counterpart is *.
Part 2b will be posted in a few hours probably.
submitted by kushcomabemybedtime to lisp [link] [comments]

[Spoilers] So, I promised to write a tirade on what I think is wrong with CDDA, and how I'd refocus the game on a GD level, and here it is.

And do not get me wrong, it absolutely still is my #1 favorite game, it just has some... really, really major glaring flaws. Let me pre-emptively apologize for how meandery this post is, and warn you that if you never got far in the game, you might want to avoid the spoilers. If you don't want to read all of it, please read the "What is wrong with CDDA" section and the "tl;dr/Summary" one; they are the most important outline of what I'm talking about, the rest can be a bit incoherent/implausible.
I would also like to ping mlangsdorf, and kevingranade, as well as Raskov75 and TechnicalBen who have shown interest in this topic when I mentioned it in another thread a few days ago.
I have difficulty keeping my mind on track on my own, so if you asked me pointed questions, I could probably come up with something better than the idealized thoughts below.
I would be grateful to anyone who reads it.

What is wrong with CDDA.

In a way, I think that CDDA is a game that kinda hinges entirely on its complexity and amount of content, rather than utilizing it cleverly. I absolutely adore some aspects of it: the way crafting supports alternative materials to add depth to resource management, the systemic repair/reinforcement/modification of some items, how much you can do with vehicles, and I truly love the earlygame, and I loved figuring the game out too, but I wish it had lasted longer, far longer. Once you know what to do, the game loses most of its depth.
First off, the problem with progression in CDDA is that, rather than a set or graph of fuzzy progression milestones that you can revisit and do better at, it's more of a checklist. Of tools, of books, of skill levels, and sadly, most of that reduces to an extremely routine process of surviving the earlygame, and then just accumulating books+tools and enough food to coast by, until you're ready to level up and leave the early (and mid) game behind. And most of that progression reduces to a single central measure. You either get stronger through an action, or you don't; there is mostly no real "sideways" progression.
It's a common complaint I have with RPG games, and admittedly, Cata does far better on this front than they do, but it's still kinda bad, especially starting a new character - in a game with long-term progression like CDDA, when starting a new character, you have two options, either go through the same methodology from scratch, or... yup, just read your last character's books before butchering it for bionics. Neither is great fun.
And furthermore, as you progress, you just... leave content behind. You quickly reach a point where normal zombies, and even the brutes, mean nothing to you, much less the animals or the woods. Most of the world just goes off your mental map, as irrelevant. From that point, there is nothing you have genuine "reason" to do, beyond just your own whim. Once you know how to stay safe, the main endgame location, labs, are honestly trivial, and once you figure a certain item out, they stop even being capable of posing a risk unless you get brutally careless.

What I'd do instead

And some of these changes are gonna be... major, some implausible at this point into the game's development. Nonetheless, please treat the below as food for thought, rather than anything more definitive. I also struggled a lot to order and organize this, so forgive me.

1. Skills

First off, and you'll see why I'm proposing this in subsequent sections: IMO, skills would be far better, if they were split into individual "microskills", e.g. Electronics would be a "field", rather than a "skill", which would contain individual subskills such as soldering, signal processing, power, basic/intermediate/advanced circuit theory, microprocessors, bionics, etc.
Furthermore, rather than have a single level, each skill would have three sequential components, the proportion of each depending on the skill in question: Concept, theory, practice. A well-educated human, for example, might know the concept behind basic mechanics, and thus be able to - eventually - improvise upon it, or figure out the outline of basic electronics by studying an advanced book, but on the other hand, just reading an electronics 101 doesn't instantly make you an expert on soldering.
And yes, I'm aware that that sounds like a huge pain in the ass to manage, which brings me to the rationale behind it: I think that expecting and requiring a sole survivor to become fully self-sufficient and capable of all, on their own, is batshit, which brings me to the second point:

2. Survivors

2.1. Interactions and knowledge.

IMO, in an open-ended game, there is only one way to do dialogue, namely through a topic system much like Morrowind's, where your top-level interface uses fixed hotkeys, for main "verbs" such as "Talk about...", "Tasks", "Trade", "Training"(both ways), "Rules", "Goodbye", and then subscreens which feature the actual options, where you should be able to ask the NPC about cities they have visited, or the one they come from, to gather information, about other landmarks or creatures/species/people they encountered.
The system does not need to be elaborate, but it needs to be organized, and capable of supporting simple systematic communication of knowledge, ideally both ways, as well as how it affects your reputation. Caves of Qud has a great system[1].
Aside from skills, NPCs should have other "knowledge", such as knowledge about cities and creatures, or that you murdered their companion and they hate you for it, or that they have a health problem they need you to fix (or will try to fix themselves, if they spot the right item), all of which interacts with and affects their behavior/dialogue in at least basic ways. I have no idea how far such a system could be taken, so I'll not propose anything further.
[1] Possibly based on a long-ass suggestion post I pitched to the dev years ago, but I'm very probably just giving myself airs.

2.2. Pooling resources together

Instead of singular player characters that exist in a vacuum, fully capable of becoming an expert at everything through the previous character's books, I would base the game itself around creating a faction of NPCs with distinct backgrounds and skills, and the ability to learn and teach each other. Many crafts would take more time, but rather than being executed by the PC, they would be done by the NPCs, who would slowly become masters of their craft, and when you die, the accumulated knowledge survives not through books you've got around, but through other characters who have polished those skills.
After death, you would be able to switch to another character of your faction - and have to deal with their traits and quirks, would probably be pretty fun as well. It would also mean that "succession" can't instantly make you OP again through books, and despite losing less, you would have to invest more than just boring grind into regaining what you lost. Being able to switch between characters during a run, could potentially also be fun.
Furthermore, this would give a good reason to create bases, not by gating certain crafts or speeding tasks up behind NPC factions, but by giving them real, meaningful utility of being capable of much the same things as you, except in the background so you don't have to grind manually for days. Instead of leveling a single survivor up into a walking death machine capable of every craft, you'd be doing what humans have always done naturally: Pooling resources together, and advancing as a "society".
And bases bring me to my third point:

3. Static vs mobile bases.

3.1 Static bases.

And the "vs" here is more to highlight the fact that there is simply no competition. Not only is vehicle building more fleshed out, but also capable of more, with less hassle, and on the move. Even if you wanted to avoid vehicles, there are no static alternatives: Fridges don't work, ovens don't work, there isn't welding rig or UPS furniture, no power grids, convenient liquid storage, or.. anything, really.
I think that the game would be much more fun, if the player had both the ability and reason to "colonize" buildings, both earlier and later on. The ability to drag some freezers, fridges, ovens together, connecting them to a generator, or some other local non-vehicle source of power, would provide a new aspect of the game. Right now, even if you decide to build a base, there is extremely little you can do with it, majority of what you build is just cosmetic, honestly.
Ideally, static constructions would be "modular" like vehicle tiles, like being able to install curtains over metal bars or a door frame or run wiring through walls, or replace an oven's power cord with a wireless replacement or internal generator... possibly even make engines/etc. generate multiple resources, e.g. heat as well as horsepower.
I also think that all objects in the game should follow the same overall durability systems: A combination of static tiles' damage absorption, vehicle parts' HP, and items' durability levels. Like I said, many things that would be a huge PITA to change, at this point.

3.2. Vehicles:

Aside from the durability change mentioned above, IMO vehicles would be much better off if they needed transmission axles, wiring, and piping. This way, merging two vehicles through any kind of connector could keep them separate, while also imposing more constraints on vehicle construction, leading to the process being a bit more involved, and to the ability to make components interact with each other in a slightly more systemic way: faucets would draw from the tank they're actually plumbed to. What happens if you use alcohol for coolant?
But of course, the most important thing with regards to progression is:

4. Crafting

4.1. Success and progress.

One thing I would change is, instead of a sort of... ambiguous mechanic of "You resume your task", I would create temporary "unfinished" items for in-progress crafting, of any kind.
Second, I think that craft success/failure is too binary, and I would replace it with a system, where you are given the stated chance of crafting what you want, and rather than failing at the end, at some point you can get a prompt "You have made a mistake and wasted %nx %material, use another and continue?", so that even at far lower skill levels - as long as you know the concept/theory - you can eventually craft what you want, in a semi-deterministic manner.
Thirdly, whenever you waste, destroy, etc. a component/item, it should fall apart into "breaks into" items, rather than vanishing from existence. A lot of those scraps should be useless, but I am opposed to objects vanishing out of existence on principle, especially when it contributes to a "hoard until you get the maximum use out of your resources" dynamic in terms of crafting.

4.2. Components and item modification

I firmly believe that part of what makes vehicles amazing, is the way you can compose different available components, figure out what you can make with them, and how to achieve it, and gun/clothing modification is also fun, but...
In terms of CDDA: I think that those modifications should also be blueprints, and that there should be more of them, based on a twofold system: Modification capacity, and modification consequences. For example, a coat might have 0/2 lining, 0/4 padding, 0/1 coating slots, and each filled slot results in extra encumbrance based on both the item's suitability for modification and the specific mod you do. You should be able to add thermoelectric lining to items, "coat" it with rain-resistant filament, pad it with both some kevlar and extra pockets, e.g. tailor your own gear yourself. IMO, as many items as possible should be the "basis" for the player to work on, rather than a final end-goal, like the survivor clothing.
Wouldn't it be fun to make your own, custom survivor suit out of the best items you can find, rather than just rush towards some single goal craftable? What if you could add nails to wooden weaponry as a mod, electrify any melee weapon, serrate the blade of your trusty kukri, or coat your arrows in poison?
In terms of a game I'd make: I would make as many items as possible the sum of their parts, rather than a single static object, e.g. give every item a specialized "inventory" for components. Those components would be stuff like spark plugs for engines, stock/sights/etc. for firearms, different types of batteries for electronics, CPUs, a battery compartment (to replace it with a corded/UPS/etc. one), an accumulator, or a better/worse sawblade... point is, you should be able to jury-rig and improvise over broken components, pool items together for parts, and the repair of furniture, items, and objects could become a more involved process than "do I have the right tool and material chunk to repair".
A good example would be being able to create a battery cell of several individual sub-cells, e.g. make the first one a remotely rechargeable UPS sub-battery, then two normally rechargeable ones, and finally a plutonium mini-battery, in the case you really need your tool for an emergency.

4.3. Recipes

First off, I think that all types of blueprints should be consolidated, into the same overarching system, so they can make use of features implemented for each other. Also feel free to read the tl;dr of this section first.
Features such as for example, extending qualities from tool qualities only, to component qualities. E.g. not "bone glue or glue or duct tape", but "mquality: adhesive: 1", as well as the ability to define some components as affecting the end result's properties: Weight, durability, how handy it is to use as a tool. Ideally, those qualities would have more than a single value, which would depend on the quality itself. For example, the "fabric" quality would feature encumbrance, durability, protection values.
Some tools might be faster than others, some might impact craft success probability negatively. Ideally, that would be indicated through a relatively simple interface, like (150% speed, 90% success) after the selected tool.
Alas, at this point reworking recipes like this would be... impossible, pretty much. It's something that'd need to be tested from scratch, carefully adjusted, and figured out, to avoid bogging the player down. I am leaning towards having multiple-stage processes like construction, where individual tools/materials affect a specific stage, and the properties of the final object are defined through either a simple domain specific language, e.g. durability="min(mat1.adhesive/3, 1) * mat2.hardness * 10". OR simpler and perhaps better, mqualities could just have a numerical rating indicating how good they are for that purpose(e.g. as a bar, as armor, or as meat), and their contribute either to craft success, craft speed, or whichever property the current craft stage governs.
tl;dr: Perhaps this would need to be sawed down and simplified, but the premise here is: I would like to give the player actual reason to stay on the lookout for better tools, materials, and components, and only part of it as a "checklist" of things to find, with plenty to figure out and improvise on your own. Rather than making a survivor winter coat, why not figure out which animal's fur is the warmest, and line your greatcoat with it? Find and pursue the solution yourself, especially when it means adapting to this strange new world.

5. The environment.

5.1. Dynamic environment

What I would do here is create the notion of "groups" of zombies, animals, or survivors, which have some very basic AI simulated on the world map, and which are only realized into actual herds/lairs/buildings once you're close enough. You should be able to realize that giant bees have been raiding you recently, and that that means there has to be a new nest nearby; that wolves have wandered close, and probably have a lair; or find migrating ants on their way to establish a new colony. E.g. a combination of "dynamic environment" and "dynamic locations" to raid/clear/utilize.

5.2. Procgen improvements

First off, a small one: IMO, loot generation should be switched to first choosing an item or bundle of items, and then allocate it into containers, so that if a gunstore generates a 9mm firearm, it also generates a magazine for it, and a stack or two of 9mm ammo. It could also be used to create "types" of say restaurants, independent from the actual building.
Second off, rather than choosing a random building, IMO, there should be more instances of a part of a building being chosen randomly from a few variants with different layouts.

5.3. Challenge and combat.

This needs to be toned way down in terms of vertical progression, albeit... one way in which lower-level enemies could stay relevant would be to adopt an HP system like Exanima's, where you can take either "hard" damage (cutting/piercing/hard bashing) or "soft" damage that regenerates fast-ish on its own (absorbed by armor, glancing blows), so that even if your armor absorbs the majority of damage, you still take some.
I think that doing this would make it possible to reduce zombie counts(which are annoying as hell), without sacrificing how dangerous they are.
In fact, I'd even go as far as say have soft/hard/critical damage, with the last being extremely difficult to heal, so that extremely high-end enemies like turrets, rather than killing you, instead cripple you for a while with really tough to heal critical-type damage.
I'm not gonna talk about nerfing vehicles, because I think that the need for that is very self-evident. Unless it's intended that you can roll through anything, anywhere, be it a chicken or a tank drone.

6. tl;dr/Summary

Basically, the outline of my thoughts comes down to shifting the progression from a central measure of how strong your character is, to something both more open-ended, and touching upon more game mechanics than currently, as well as factoring the "inevitable" inheritance of a run into the core gameplay loop, in a way that makes sense in a roguelike context, and adding more depth - even if most of it would be utilized very little - to the crafting of items, bases, vehicles, and other objects. I would like to give the world around the survivor more relevance, and reasons to interact with it.
Currently, the game has incredible amounts of content, but the vast majority of it gives the player no reason to care about it, and what you care about reduces to a very one-dimensional measure of how far along you are - there's just skills, gear, and vehicles, and most of that is defined by which books you have access to. Instead of a "how does this content factor into my options?", you only ask yourself a binary "does it?"... and the answer is usually a no, especially as you get further in the game.
And that is not only boring, but leads to the issue of power creep: because there is only a single axis to progress on, content has to make you "stronger" to be relevant, and since everything falls on that axis, the stronger you are, the less of the game is relevant to you. At some point, once you know what to do, it's just a grind.
And I think that the game could do far better than that, if it focused on how many distinct things surviving entails, especially multiple humans coming together and the continuous process of adapting to the environment and utilizing the new, extradimensional objects and creatures. The world has essentially ended, with all its military might, you're supposed to be surviving in that world, not becoming its new God. And as long as the only goal is to "survive and increase your combat capability", every new addition and change to the game will do nothing towards guiding it towards becoming a better game.
Or, basically, the game needs to stop being about a single central measure of progression. Preparing yourself for winter/cold environments should be separate from preparing yourself for facing robots, which should be separate from surviving zombies, which in turn should be governed by a different metric of progression than maintaining a food supply, preparing for the worst (death of your character), and tweaking your gear; and more of the game should be a process of continuous improvement, rather than ticking items off a checklist. Modular content would go a very long way in this respect, imo.
That's what I mean when I say the game has deep flaws that I think are unlikely to be corrected. And I know that my post is incoherent and at times extremely ambitious... I just... find it difficult to collect myself better than that. Please do not be too mean.
And if you have any questions, please ask me, I am confident in my ability to come up with, if not answers, then at least food for thought. I am well capable of coming up with less ambitious proposals than the stuff here, I just... idk, I had to dump the contents of my brain first.
I will do more thought on actual, more modest, change proposals as I continue my current run, and open a few issues, or make another megapost with a collection of the small things mainly.
submitted by derpderp3200 to cataclysmdda [link] [comments]

What is in binary signals?

What is in binary signals?
Let us consider in detail the structure of the signal. In the panel, the trader sees:
Trading asset. The asset on which you can open a transaction; in addition to forex pairs, the binary options signals include cryptocurrencies;
Price. The price at the time of the signal or the opening of the current candle for the adaptive strategy.
Time. The time since the signal appeared; for the adaptive strategy, the time when the statistics were last updated.
Expiration. The expiration time of the option.
Power. It is defined as the number of profitable options in the past with the current combination of indicators.
https://vfxalert.com/en/?partner=8&utm_source=reddit.com

https://preview.redd.it/mwwns519usk41.png?width=1100&format=png&auto=webp&s=d316019d5b57741a5a756a78c96e7b9525194692
#optionstrading #BinaryOptionTrading #binaryoptionstutorial #whatisabinaryoptionsstrategy #howtolearntotradebinaryoptions #LearnHowToTradeBinaryOptions #OptionsTradingForBeginners #traderBinaryOptions #BinaryOptionsTrader
submitted by vfxAlert to u/vfxAlert [link] [comments]

Variability Encryption

Variability Encryption
I created mostly-pseudocode for the encryption side of things; there will be some parts that just refer to notes below, owing to my lack of programming-language training/skills and such. The decryption part will have to come at a different time.

But in here I will discuss the reasons it works. First it is Symmetric Encryption, as the same key will encrypt and decrypt. Decryption notes to come later when I have time.

1) Plain Text/Known Text
A request for me to use a known-text format was made; I saw it, but I have been working 60-70 hours a week and also trying to create the pseudocode. I can tell you that using a known text and then modifying it won't work. Here is why: the combinations will create a ternary base, and the juggle and shuffle are designed to increase disorder. If there were one run of each process and the order was shuffle, juggle, combinations, then it would be plausible that a pattern could show up. However, the repetitions of the three cycles and the key-derived pattern of each process inside the three cycles will create a very severe jumble of 1's and 0's, even if our source was entirely 0's or 1's to start with.

2) Block Cipher attacks
The way the entire system works, we have no real identifiable blocks to work with. The key is read in a dynamic manner, which results in repeated uses of the key starting at different portions of the key. Further, the combinations phase has limits on string length, but those limits allow a lot of variability in the length of the specific portion being applied to. Inside the string lengths we further have a lot of variability in the number of combinations, and the salts further shake up the variability in many manners. There is no block as it were, except where the shuffle occurs (well, kinda), and that is not the primary function of the system. In theory it may be necessary to make blocks for the shuffle portion, but those blocks will not function in the same manner as existing blocks, which will require testing for each and every possible variation.

3) Attempts to use a similar key (or part of a known key)
These attacks can of course happen when there is a man-in-the-middle attack, where the key is actually made up of a long-term key between users, a mid-term key, and a short-term key, such as the Signal app uses, or however many versions of multiple key portions. So assume your man-in-the-middle attack got half the key, and you try to apply methods to use it to get in. The first problem is that key lengths for multiple key portions do not need to be fixed. The idea of a fixed key length is laughable considering the methods used in variability encryption. A variable key length of 32 to 64 bits, where the accessed portion makes up a fair to sizable portion thereof, does not give them enough information to recreate the key size with any reliability without trying all possible sizes. Having a small key is not detrimental to the entire system either; admittedly the difficulty increases with key size, but the variability system can start at 32 bits without issue.

4) Difficulty in Brute Forcing increases with file size
The Juggle routine definitively increases the problem for decryption. Since it can apply to the whole of the file at once without too much predicted issue, this means that an enemy operator must process the whole file for three of the 9 processes, and they have to accurately judge when the juggle process was used each of the three times. The combination stage dramatically increases difficulty as well, since it can encode a lot of data in the larger string sizes, thus making accurate string-length detection a necessity. The shuffle is by far the easiest portion to decrypt in theory, but the other stages make accessing it properly very difficult.

5) The key has other strengths
Due to the way the key works, where it identifies different string lengths and where it can be added infinitely to itself with a low possibility of exact repetition, the key is weaponized on its own. The attacker would need to know how many repetitions of the key occurred and where the individual key portions were spliced in order to use the information.

6) Statistical Attacks
There is, admittedly, a bit of a possibility of weakness to statistical attacks. The weakness, however, is very low versus the total capabilities of the system. You would need to know a significant portion of the key and have a known-text example. Given both of those at the same time, you could in theory attempt to derive the sequence of the processes with enough effort. However, I would say this effort will still be far harder than for AES 512.

7) Brute Force
Combinations make for a large spread you need to test, the juggle makes for tests to be done three times over the whole of the file, and the shuffle makes you waste time and energy trying to derive the proper order of things. If you encrypt a megabyte, a not unheard-of size (sarcasm), the attacker must account for all possible key lengths and all possible key variations, and because of the way the combinations work they must also be able to predict, to a fair extent, the source of the data; yes, the data itself helps create the encrypted results, thanks to the combinations system. Thus the processor time would be far in excess of the time it would normally take to run every iteration of the possible key at all possible file sizes; with the combinatorics, the work to crunch the whole code is roughly cubed. It becomes factorial, in fact, which should definitely scare the crap out of cryptologists.

8) Yes, some attacks will always succeed.
You can buy the password, you can beat the password out of someone, you can probably derive the sequences if you can watch the processor requirements, getting into the RAM while it is working will get you far, and so forth. However, without full information (say it is an ATM speaking to the bank server and you are in the middle, but the encryption code is hardwired), you will get nothing for it.

9) While a one-time pad is obviously going to be the strongest, the methods involved in variability encryption leave no doubt: if right now, right now, the atoms in the sun (all of them) were made into a supercomputer and the power doubled annually, a petabyte would never be successfully cracked before entropy destroys everything. AES 512 cannot say that with those standards, and yes I am bragging, but dammit, I feel good having found a statistical method that frankly cannot ever be decrypted with brute force unless the file size is small and the key small and the attacker knows that. By small I mean like 8 bits small, or some silly small size like that.

10) Patent and Patent Pending. The lawyer tells me I have to include that in my works, I listen to my lawyer. If you want in you can negotiate with me.

11) I think (now, I do not have proof, as I have been exclusively working on this encryption routine and the other patent applications) that AES can be exploited due to the high storage potential of combinatorics. It will need a lot of my time, but it feels right. But again, I am not going down that squirrel hole until I get real code that shows how the system works, so I can take it before some very rich individuals, or until one of the other patent applications gets attention as well.


_________________ PSEUDO CODE (Kinda) _____________________

Pseudo Code - Variability Encryption // Variability is key, there is nothing else. This won't be real Pseudo Code but it should suffice for most here. //
Start:
Load Key, to be called Key_Card
Load File to Encrypt, to be called Step_Zero

// Process_1 = Combin_First //
// Process_2 = Combin_More //
// Process_3 = Juggle_Scramble //
// Process_4 = Shuffle_Mixup //
// Process_5 = Combin_Restore //
// Process_6 = Juggle_Restore //
// Process_7 = Shuffle_Restore //
// Process_8 = Ternary_Binary //
// Process_9 = Binary_Ternary //
// The processes are the main methods involved in first encrypting, then in decryption //

Hash Key_Card, to be called Hash_Key

Process:
// The effort is to get a value from 1 to 6 to generate a pattern of the processes above; assume that if there is an error, another process assigns a value to each by using the key to generate a quasi-random option //
Value of Hash_One = If Hash_Key ends with 7, 8, 9, or 0, divide by ten and drop the decimal, else use the last digit.
Value of Hash_Two = If the first digit in Hash_Key is a 7, 8, or 9 then (look at the next digit; if the next digit is a one then look to the next digit), else use the digit.
Value of Hash_Three = If RoundDown (Divide Hash_Key by 5) = 7, 8, 9, or 0 then if (RoundDown (Divide Hash_Key by 4) = 7, 8, 9, or 0 then (RoundDown (Divide Hash_Key by 6))), else use the digit.

Order_One uses Process_1, Process_2, Process_3, Process_4.
Order_Two uses Process_2, Process_3, Process_4.
Order_Three uses Process_2, Process_3, Process_4, Process_8.
Process_1 = a, Process_2 = b, Process_3 = c, Process_4 = d, Process_8 = e.
// The order of each of Order_One, Order_Two, and Order_Three gets determined here //
Order_One = if Hash_One = 1 then acd, else if Hash_One = 2 then adc, else if Hash_One = 3 then cda, else if Hash_One = 4 then cad, else if Hash_One = 5 then dac, else if Hash_One = 6 then dca.
Order_Two = if Hash_One = 1 then bcd, else if Hash_One = 2 then bdc, else if Hash_One = 3 then cdb, else if Hash_One = 4 then cbd, else if Hash_One = 5 then dbc, else if Hash_One = 6 then dcb.
Order_Three = if Hash_One = 1 then bcde, else if Hash_One = 2 then bdce, else if Hash_One = 3 then cdbe, else if Hash_One = 4 then cbde, else if Hash_One = 5 then dbce, else if Hash_One = 6 then dcbe.
// Note that this makes the order of the processes per each of 3 distinct rotations difficult to predict, and allows for an initial change into ternary during the first combinatorial phase and a return to binary at the end of the process //

Create file: Process_Run
Process_Run = Step_Zero
{
// Process_1 //
Load Process_Run
Load Key_Card
// The basis for the keycard is simple: we identify how many bits we are going to use for the string length, then we use that to identify the possible length of the combinations portion of the key; we then see if there is going to be a salt, and if there is a salt we read the next 3 bits. //
Load last 3 bits of Hash_Key, find the Decimal + 1 and save as Hash_Ke1 // This will result in a 1 to 8 value //
// Declaring a few things that will be used but will be modified in the following processes //
Str_Len = 0
Key_Run = Key_Card
Salt_True = 0
Com_Pare = 0
Com_Cnt = 0
Str_Cnt = 0
Chk_Salt = 0
// Replacement_File.txt //
// Replacement_File.txt will be a separate post for people; it will be a large file which will have a replacement table based upon combinatorics in it. It will be designed upon a variety of sizes, but it will not have a full and entire table; it should be sufficient for the purposes of people here to understand how it works //
{
If Hash_Ke1 > 0 & < 3 then Str_Len = 4,
if Hash_Ke1 > 2 & < 6 then Str_Len = 5,
if Hash_Ke1 > 5 & < 9 then Str_Len = 8
}
{
If Key_Run does not have sufficient length then Key_Run = Key_Run + Key_Card
Remove Str_Len bits from Key_Run and identify the Decimal + 1 value of these bits. This will be called Str_Cnt
Com_Cnt = RoundDown (Log (Str_Cnt / 2) / Log (2))
If Com_Cnt < 4 then Com_Cnt = 4
// The next step analyzes Com_Cnt to see if it is small enough, and reduces the length if it is not //
{
While // I am creating a repeating sequence that repeats until the if-then is true //
Load Decimal + 1 of Com_Cnt bits from Key_Run, value is Com_Pare
If Com_Pare > RoundDown (Str_Cnt / 2) then Com_Cnt = Com_Cnt - 1 else End
}
Remove Com_Cnt bits from Key_Run
// The purpose of the code above is to get the decimal of the first portion of our string-length bits, and to get a decimal amount for our combinations count, which will be half that of the decimal for the string length, or less. //
Chk_Salt = Remove 1 bit from Key_Run
If Chk_Salt = 1 then remove 3 bits from Key_Run; these three bits become Salt_True
Using Com_Pare, identify the Replacement_File.txt table section for the Ternary Replacement.
Remove the identified bits from Process_Run as identified by the table inside Replacement_File.txt in match to the corresponding binary. Call the result Out_Put1
// This is using the table to identify a length section appropriate for the replacement, then identifying the string section inside it that would match our source, which will then indicate what to replace it with //
If Chk_Salt = 1 then ************* SALTS NEED TO GO HERE ***********
// Some salts occur before the next process, some after. I am going to make a separate post about the salts //
Fill Empty Spots in Out_Put1 by using the appropriate length of Process_Run
}
{
// Process_2 is very similar to Process_1; the main difference is that it is already running in ternary. //
Load Process_Run
Load Key_Card
Load first 5 bits of Hash_Key, find the Decimal + 1 and save as Hash_Ke2
Str_Len = 0
Key_Run = Key_Card
Salt_True = 0
Com_Pare = 0
Com_Cnt = 0
Str_Cnt = 0
Chk_Salt = 0
// Replacement_File.txt //
{
If Hash_Ke2 > 0 & < 4 then Str_Len = 4,
If Hash_Ke2 > 3 & < 9 then Str_Len = 5,
If Hash_Ke2 > 8 & < 14 then Str_Len = 6,
If Hash_Ke2 > 13 & < 19 then Str_Len = 7,
If Hash_Ke2 > 18 & < 24 then Str_Len = 8,
If Hash_Ke2 > 23 & < 29 then Str_Len = 9,
If Hash_Ke2 > 28 & < 32 then Str_Len = 10.
// Longer possible string lengths in follow-up repetitions increase the difficulty of statistical analysis and brute forcing significantly. //
}
{
If Key_Run does not have sufficient length then Key_Run = Key_Run + Key_Card
Remove Str_Len bits from Key_Run and identify the Decimal + 1 value of these bits. This will be called Str_Cnt
Com_Cnt = RoundDown (Log (Str_Cnt / 2) / Log (2))
If Com_Cnt < 4 then Com_Cnt = 4
{ While
Load Decimal + 1 of Com_Cnt bits from Key_Run, value is Com_Pare
If Com_Pare > RoundDown (Str_Cnt / 2) then Com_Cnt = Com_Cnt - 1 else End
}
Remove Com_Cnt bits from Key_Run
Chk_Salt = Remove 1 bit from Key_Run
If Chk_Salt = 1 then remove 3 bits from Key_Run; these three bits become Salt_True
Using Com_Pare, identify the Replacement_File.txt table section for the Ternary Replacement.
Remove the identified bits from Process_Run as identified by the table inside Replacement_File.txt in match to the corresponding binary. Call the result Out_Put1
If Chk_Salt = 1 then ************* SALTS NEED TO GO HERE *********** // Some salts occur before the next process, some would after the next process. I am going to make a separate post about the salts // Fill Empty Spots in Out_Put1 by using appropriate length of Process_Run } { // Process_3 // // The Juggle Routine increases the net cost for brute force attempts to total processor time * 2^n where n is the number of bits in the entire file to be encrypted. This is per cycle involved and if they get the order of processes correct. // Hash_Mark = Hash of Key_Card Len_Mark = Length of Hash_Mark divided by 2 rounded down Hash_Mark = Hash_Mark - Len_Mark Sort_Hash = Last 3 bits of Hash_Mark Done_Hash = Decimal +1 of Sort_Hash Hash_Mark = Hash_Mark minus Sort_Hash Trig_Cnt = Last three bits of Hash_Mark Jug_Start = 0 Tri_Dec = Decimal + 1 of Trig_Cnt Trig_1 = 0 Trig_2 = 0 Trig_3 = 0 Trig_4 = 0 Trig_5 = 0 Trig_6 = 0 Trig_7 = 0 Trig_8 = 0 Trig_? = 0 // see lower notes //
{
If Process_Run is Ternary then run sub_prss1, else run sub_prss2
If Done_Hash < 3 then Done_Hash = 3
// Examines whether the system is in Ternary; should be obvious //
}
{ // sub_prss1 //
// Trig_Dec and Done_Hash are the main values that determine the length and number of triggers. //
Load Process_Run
Trig_? = ? // The above needs to grow incrementally from Trig_1 to Trig_8, or use some sort of array //
{ While // Trigger making //
Trig_Dec > 0:
Read the first three bits of the key;
if Trig_? = 000 then 00, if Trig_? = 001 then 01, if Trig_? = 010 then 10, if Trig_? = 100 then 02, if Trig_? = 110 then 20, if Trig_? = 101 then 12, if Trig_? = 011 then 21, if Trig_? = 111 then 11
// Note this is the extremely simple version //
}
{ While Process_Run still has bits, repeat this sequence:
Remove Done_Hash trits from Process_Run; these are First_Trig
Read First_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Sec_Trig.
Read Sec_Trig in reverse to find a match to a Trig_? value; on a match, remove the remainder after the match to Thrd_Trig.
Read Thrd_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Frth_Trig.
Read Frth_Trig in reverse to find a match to a Trig_? value; on a match, remove the remainder after the match to Fith_Trig.
Read Fith_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Sxth_Trig.
Read Sxth_Trig in reverse to find a match to a Trig_? value; on a match, remove the remainder after the match to Svth_Trig.
Read Svth_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Egth_Trig.
// Whatever remains goes only into the 8th set in this version //
End While when Process_Run is empty }
{ Process_Run = Reverse order of data for Sec_Trig, Frth_Trig, Sxth_Trig, Egth_Trig }
}
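The trigger table and the eight-way carve are the heart of the routine. Below is a loose Python sketch of one pass over a single block; the post leaves open how the sets accumulate across blocks and what happens when no trigger matches, so every name and both helpers are my own reading, not the author's method.

    # 3-bit key chunk -> 2-trit trigger, per the simple table above
    TRIT_PAIR = {
        "000": "00", "001": "01", "010": "10", "100": "02",
        "110": "20", "101": "12", "011": "21", "111": "11",
    }

    def split_after_first(s, triggers, reverse=False):
        """Split s just after the first trigger hit, scanning the reversed
        string when reverse=True. Returns (kept, handed_on)."""
        scan = s[::-1] if reverse else s
        hits = [scan.find(t) for t in triggers if t in scan]
        if not hits:
            return s, ""                     # no trigger: nothing is handed on
        cut = min(hits) + 2                  # trigger pairs are two symbols long
        kept, rest = scan[:cut], scan[cut:]
        if reverse:
            kept, rest = kept[::-1], rest[::-1]
        return kept, rest

    def juggle_block(block, triggers):
        """Carve one Done_Hash-sized block into eight trig sets, alternating
        forward and reverse reads; the leftover goes into the 8th set."""
        sets = []
        for k in range(7):
            kept, block = split_after_first(block, triggers, reverse=(k % 2 == 1))
            sets.append(kept)
        sets.append(block)
        return sets

    def reassemble(sets):
        """Final step: the 2nd, 4th, 6th and 8th sets are reversed."""
        return "".join(s[::-1] if i % 2 == 1 else s for i, s in enumerate(sets))

For example, juggle_block("01200121102", set(TRIT_PAIR.values())) carves one trit block into eight (possibly empty) sets.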
{ // sub_prss2 //
Key_Fun = Key_Card
Trig_? = ?
Load Process_Run
{ While Trig_Dec > 0:
Remove three bits from Key_Fun; this becomes Trig_? // incremental increase function //
// The design may use all strings as keys if the odds fall out correctly with Binary; this is an extremely simple version //
}
{ While Process_Run still has bits, repeat this sequence:
Remove Done_Hash bits from Process_Run; these are First_Trig
Read First_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Sec_Trig.
Read Sec_Trig in reverse to find a match to a Trig_? value; on a match, remove the remainder after the match to Thrd_Trig.
Read Thrd_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Frth_Trig.
Read Frth_Trig in reverse to find a match to a Trig_? value; on a match, remove the remainder after the match to Fith_Trig.
Read Fith_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Sxth_Trig.
Read Sxth_Trig in reverse to find a match to a Trig_? value; on a match, remove the remainder after the match to Svth_Trig.
Read Svth_Trig for the first match to a Trig_? value; on a match, remove the remainder after the match to Egth_Trig.
// Whatever remains goes only into the 8th set in this version //
End While when Process_Run is empty }
{ Process_Run = Reverse order of data for Sec_Trig, Frth_Trig, Sxth_Trig, Egth_Trig }
}
{ // Process_4 // Shuffle Process //
// Shuffle is designed simply to swap a key-derived length of trits or bits //
Read the first three bits of Key_Card. This becomes Shfl_Len
Shfl_Dec = Decimal + 1 of Shfl_Len
If Shfl_Dec < 2 then Shfl_Dec = 2
// The above is how we decide what length of blocks is being replaced. //
Totl_Left = Shfl_Dec
Bin_Bin = 0
Key_Shfl = Key_Card
{
If Totl_Left > 2 then Bin_Bin = Log(Totl_Left) / Log(2) // as with the log function in MS Word //
Key_Req = Remove Bin_Bin bits from Key_Shfl
Shfl_? = Decimal of Key_Req
// Problem defining this. I will make a separate post showing how this would look, but not how it would work, in pseudocode // } }
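Since the post stalls here, the following is only one plausible reading of the Shuffle: draw roughly Bin_Bin = log2(remaining) key bits per pick and swap key-selected blocks, Fisher-Yates style. The selection rule and all names are mine, not the author's.

    import math

    def key_driven_shuffle(blocks, key_bits):
        """One plausible reading of Process_4: swap key-chosen blocks.
        blocks: list of equal-length bit/trit strings; key_bits: '0'/'1' string."""
        key = key_bits
        out = list(blocks)
        for i in range(len(out) - 1, 1, -1):     # loop while Totl_Left > 2
            width = math.ceil(math.log2(i + 1))  # Bin_Bin bits for this pick
            if len(key) < width:
                break                            # out of key material
            pick = int(key[:width], 2) % (i + 1) # Shfl_? = Decimal of Key_Req
            key = key[width:]                    # remove Bin_Bin bits from Key_Shfl
            out[i], out[pick] = out[pick], out[i]
        return out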
// Decryption to come when I have the time, hopefully it is obvious to some //
___________________________________________SALTS LIST____________________________
SALTS
During the combinatorics phase, additional methods known as salts can be added to the source to confound attempts to break the encryption. These salts can be modified to use binary or ternary as needed.

The salts are:
Salt 1: add combination(s) at
This salt will be triggered by a 000 in the key. The next two to four bits of the key determine where in the current set the fake combination will sit, while the length of the combination string determines how many bits are required.

Salt 2: ended combination, start new combination early
This salt is triggered automatically whenever possible; it will not allow a previous combination to break the size rules, i.e. a minimum string length of 4 and a maximum of 50% combinations inside the string length. This increases security by preventing detection of this salt. This salt will not be used if there is a marker for another salt. (This one is disabled in the example, as I am only human.)

Salt 3: Simulate multiple smaller combinations
This salt will be triggered by a 001 in the key. If the combination is under 8, it defaults to NO SALT; otherwise it defaults to two distinct combination strings, whose sizes are obtained by dividing by 2 and rounding down for the first, with the remainder for the second (see the sketch below). A possible alternative is a marker in the key to allow more divisions, provided the string length is long enough.
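The sizing arithmetic as a worked sketch (the function name is mine): a length-9 combination splits into 4 and 5.

    def salt3_sizes(str_len):
        """Salt 3 sizing: under 8 -> NO SALT; otherwise split into
        floor(n/2) and the remainder, e.g. 9 -> (4, 5)."""
        if str_len < 8:
            return None                  # defaults to NO SALT
        first = str_len // 2             # divide by 2, round down
        return first, str_len - first    # remainder for the 2nd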

Salt 4: Skip Combination entirely
This salt will be triggered by a 010 in the key. The size will be determined by the previous combination string length: if that was under 8, the string length will be 10; if it was over 8, the string length will be 6. It is also possible to vary the size with a math formula, or a hash value, or.. Similar to Salt 8, except that we still use the full length of the listed string.

Salt 5: Skip real combinations, insert fake combination
This salt will be triggered by a 100 in the key. The size will be determined by the previous combination string length: if that was under 8, the string length will be 10; if it was over 8, the string length will be 6.

Salt 6: Can use 2 dimensions
This salt will be triggered by a 011 in the key. It results in a combination running down instead of left to right. This is a complexity issue; I have plans for up/down, but is the encryption community ready for this complexity? This key can be avoided if the complexity is too much; it is unlikely that making blocks for this function would give any existing block attacks vulnerabilities to exploit.

Salt 7: This will invert the binary values in the next combination
This salt will be triggered by a 101 in the key.

Salt 8: Between fixed-length combinations, where the leading combination string ends with a combination location and the next string starts with one, you can insert a completely blank string. This is contrary to Salt 4, where we use the assigned string length in full rather than a variable. This salt will be triggered by a 110 in the key.

Salt 9: If we are using ternary, this salt alters which of the 0, 1, 2 values is used to encode the combinations and which is carrying the binary. This can be a permanent flip or a temporary flip, as desired or as built into the function. This salt will be triggered by a 111 in the key.
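For reference, the trigger codes above collapse into a small dispatch table. The handler names here are placeholders of my own, since the salt bodies are deferred to a later post; Salt 2 has no code because it fires automatically when the size rules permit.

    # Hypothetical dispatch for the 3-bit salt trigger codes listed above.
    SALT_TABLE = {
        "000": "add_fake_combination",      # Salt 1
        "001": "split_into_smaller",        # Salt 3
        "010": "skip_combination",          # Salt 4
        "100": "insert_fake_combination",   # Salt 5
        "011": "two_dimensional",           # Salt 6
        "101": "invert_next_combination",   # Salt 7
        "110": "blank_string_between",      # Salt 8
        "111": "ternary_value_flip",        # Salt 9
    }

    def dispatch_salt(code):
        """Return the salt action named by a 3-bit code, or None for no salt."""
        return SALT_TABLE.get(code)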
________________________________________________Example Tables (paste from Excel)_______________________________


https://preview.redd.it/fjtp9f4na5931.png?width=677&format=png&auto=webp&s=51bdfe2144d8c04cfc3c9dea42ef6b554344d9bd
submitted by PHDEinstein007 to encryption

[Beyond 3.0] Server Downtime for ~9 hours | Frontier Livestream with devs at 2 PM GMT

Update is LIVE!

Beyond 3.0 Launch Day Livestream

Beyond 3.0 CGI Trailer - Commander Chronicles: The Deal

Will Flanagan on the forums:
Hi everyone,
Soon you'll be able to strap yourself into the cockpit of the Chieftain and explore a host of new gameplay features - Chapter One is almost here!
The galaxy servers will be down today from 9.30 AM (GMT) for approximately 9 hours. There is a chance that this could run over, so we really appreciate your patience. As usual with these updates we'll keep you as up to date as possible.
While you wait, join us for a pre-launch livestream at 2:00 PM (GMT) for a Beyond - Chapter One recap with members of the development team on YouTube here.
We'll post the changelog soon, and add update information to the thread as it comes in.
Keep an eye on our social media channels too for regular updates on the server status. We will try to respond to as many queries as possible, and we will give everyone advance notice of when the servers come back online; social media is your best bet for knowing when the servers are back up and running!
Thanks!

Elite Dangerous: Beyond - The Features of 3.0

(Includes improvements Coming Soon for Crime and Ship Destruction, Kill Warrant Scanner, Superpower Bounties)
Edward Lewis:
Here's an overview of all the features coming to Elite Dangerous: Beyond - Chapter One. With one or two exceptions (outlined within the relevant section), the information below describes how each feature will work at launch of Elite Dangerous 3.0.

Patch Notes

New features for 3.0

Crime
Missions
Ships
Trade Data & Galaxy Map
Engineers
Weapons and modules
Galnet Audio
Installation and Megaship Interactions
Surface material system
Quality of Life
Misc Features
Consoles

Fixes and Improvements

This update includes well over 1000 fixes for various issues that have been discovered and investigated during the development process since the release of 2.4. For the sake of clarity, we have primarily listed below fixes for issues that have been reported to us by the community or other important changes.
Art
Audio
Camera Suite
Consoles
Controls & Control Devices
Engineers
Galaxy Map/System Map
General Fixes & Tweaks
Holo-Me Creator
Hyperspace/Supercruise
Installations/POIs/USSs
Launcher (PC Only)
Missions
Multi-Crew
NPCs
Outfitting
Player Journal
Powerplay
Render
Ships & SRV
General Ship Fixes and Improvements
Anaconda
Asp Explorer
Cobra Mk. III
Diamondback Explorer
F63 Condor
Federal Gunship
Imperial Clipper
Imperial Courier
Imperial Cutter
Orca
Python
SRV
Type 7
Type 9 Heavy
Type 10 Defender
Viper Mk4
Stability Fixes
Starports/Outposts/Surface Ports
Synthesis
User Interface
VR
Weapons & Modules
Wings
submitted by ChristianM to EliteDangerous

Best Binary Options Indicator | Ultimate Trend Signals
Price Action Binary Options Signals That Work - YouTube
Binary Option Winner Indicator Signal For Iq Option Live ...
Binary Option - YouTube
MAGIC INDICATORS - NEVER LOSE in options trading - TRY TO ...
Binary indicator - YouTube
Binary Options 60 Seconds Indicator 99% Winning Live ...
Binary options fractal strategy, trading system indicators robot Signal

Using this binary options indicator allows you to define the best terms for trading in advance. It gives highly accurate signals for entering deals while minimizing risk. It uses Price Action: for a market prognosis it is the price that has the most significance, not its history, so the indicator works on the basis of Price Action and informs you in advance of the % success. The market is changing, and the indicator analyses it ...

Binary options trading signal services and binary option robots have the potential to turn an average trader into a great one. Finding a good signal service will help ensure your success as a trader. Once you sign up to at least one of our trusted signal providers, you will save a huge amount of time on researching and analyzing market data and can focus solely on making a profit. By ...

The All CCI binary indicator tracks the Commodity Channel Index across a wide range of timeframes and displays the CCI trends in mini windows below the main chart. Buy a CALL option for CCI values above the zero line (bullish trend). Buy a PUT option for CCI values below the zero line (bearish trend). Use it in conjunction with other indicators to define CALL/PUT entries and expiration times ...

MT4 Binary Options Signal Indicator: videos and tutorials about some of the most widely used binary options indicators and technical-analysis software. Learn how to use trading indicators that will help you deepen your understanding of the financial markets ...

I came across a forex indicator, BB Arrow Signal, which was one of the best indicators, producing 90% accurate signals for any currency pair. I thought to give it a try, but unfortunately it was only available for MT4, not MT5. Bad luck for me, because my broker runs on the MT5 platform only. So I thought of creating my own indicator, but I couldn't code, because I'm not a programmer ...

The binary options trading strategy based on the Arrow_Signal.ex4 indicator aims to define trading opportunities based on direction. The strategy displays arrows on the chart where the trader can buy or sell. However, it must be used according to the rules spelt out below to ensure profitability.

Binary-Signal.com does not accept any liability for loss or damage resulting from reliance on the information contained within this website; this includes educational material, price quotes and charts, and analysis. Please be aware of the risks associated with trading the financial markets; never invest more money than you can risk losing.

That is not the case for work with binary options, which takes place primarily on short timeframes. When the signal defines the entry point, the redrawing of indicators affects the result most directly. There are two ways to determine whether an indicator for binary options redraws.

Binary option trading on margin involves high risk and is not suitable for all investors. As a leveraged product, losses are able to exceed initial deposits, and capital is at risk. Before deciding to trade binary options or any other financial instrument, you should carefully consider your investment objectives, level of experience, and risk appetite.

BBand Stop binary option strategy: BBand Stop is a 5-minute binary option trade strategy which uses the BBand Stop alert indicator in MT4 to define the ideal position to enter the trade. This indicator is used along with the Bollinger Bands.
How to set up the chart: Timeframe: M5; Template: BBand Stop Strategy (Download here: eDisk or UlozTo.Net)
How does this strategy work: arrows (pointing up ...
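The All CCI zero-line rule above is the one concrete, checkable signal in this collection, so here is a minimal sketch of it, assuming OHLC data in a pandas DataFrame with high/low/close columns and the standard CCI formula with the usual 0.015 scaling constant; the 20-bar period and function names are my choices, not the indicator vendor's.

    import pandas as pd

    def cci(df, n=20):
        """Commodity Channel Index over n bars from high/low/close columns."""
        tp = (df["high"] + df["low"] + df["close"]) / 3       # typical price
        sma = tp.rolling(n).mean()
        mad = tp.rolling(n).apply(lambda w: (w - w.mean()).abs().mean())
        return (tp - sma) / (0.015 * mad)

    def cci_signal(df, n=20):
        """Zero-line rule from the All CCI description: CALL above 0, PUT below.
        Warm-up bars are NaN and map to 'NONE'."""
        c = cci(df, n)
        return c.apply(lambda v: "CALL" if v > 0 else ("PUT" if v < 0 else "NONE"))

As the text advises, this is best combined with other indicators before timing CALL/PUT entries and expiries.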


Best Binary Options Indicator Ultimate Trend Signals

Hi Friends, I will show in this video the Binary Options 60 Seconds Indicator Signal, 99% winning live trading proof ...
Hi guys, and welcome to NerdsHD. In today's video: Binary Options Indicator Ultimate Trend Signals. We will do a short review about Ultimate Trend Signals and how you can use it to increase ...
I will show in this video the Binary Option Winner Indicator Signal for IQ Option live trading. Join Telegram: http:...
FOREX & BINARY SIGNALS http://nextwavetrading.com/SIGNALS/forex&binary OPEN YOUR ACCOUNT IQ OPTION HERE: http://nextwavetrading.com/IQoption IQ OPTION FREE D...
Binary options fractal strategy, trading system indicators robot Signal, iq option strategy, iq option, IQ Option, iq option tutorial, Live Real Account Binary Option Trading, real account trading ...
One-minute strategy to trade price action trading signals. Watch how I use simple trading rules to increase win rate. Get 10x Trading System: https://trading...
How To Set Perfect Moving Average Crossover Trading Strategy With MT4 And Live Trading ... Best 90% winning guaranteed profit indicator signal for binary option live trading by Smart Tamil Tech ...
Get trading bots: contact via telegram https://bit.ly/3aR8baT Get pro or free signals: https://bit.ly/2N5PLrp Get strategy trading: visit my twitter https://b...
Best Binomo - Binary option - MT4 Indicator // Best Signal Software // (FREE DOWNLOAD) by POWER OF TRADING. 12:47. How to always win in binary trading // Best binary option trading strategy by ...
MAGIC INDICATORS - NEVER LOSE in options trading - TRY TO BELIEVE. GET FREE SIGNAL HERE https://goo.gl/XgsUgZ Find Out Top Post (pinned post) and Visit SIGNAL...
