Problem Description
Usage:
Call the program from MATLAB, with the following syntax:
toyexample
Usage:
Call the program from MATLAB, with the following syntax:
toyexample2
| Inputs: | ||
| N | - | number of samples each iteration |
| rho | - | fraction of best performing samples to take |
| Outputs: | ||
| sigma | - | A vector of all of the std. deviations |
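As a rough illustration of the loop these toy programs run, here is a minimal CE sketch in MATLAB for a one-dimensional problem; the performance function and stopping tolerance are assumptions for illustration, not taken from toyexample itself. It records the standard deviation after every iteration, matching the sigma output above.

```matlab
% Minimal CE sketch (assumed problem: maximise a simple 1-D function);
% not the code of toyexample itself.
N = 100; rho = 0.1;                   % samples per iteration, elite fraction
S = @(x) exp(-(x - 2).^2);            % assumed toy performance function
mu = 0; sig = 10; sigma = [];
while sig > 1e-3                      % assumed stopping tolerance
    x = mu + sig * randn(N, 1);       % sample N points from N(mu, sig^2)
    [~, idx] = sort(S(x), 'descend');
    elite = x(idx(1:ceil(rho * N)));  % keep the best rho*N samples
    mu  = mean(elite);                % refit the sampling distribution
    sig = std(elite);
    sigma(end + 1) = sig;             % record the std. deviation each iteration
end
```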
Problem Description
Usage:
Call the program from MATLAB, with the following syntax:
[ x , y ] = bayes( N , rho , alpha , sigma , y )
Example: [ reimage , oldimage ] = bayes( 500 , 0.04 , 0.7 , 0.1 , image )
In this example, image is a matrix consisting of two unique gray levels (e.g. 0s and 1s), and the noise to be added is normally distributed with a standard deviation of 0.1.
| Inputs: | ||
| N | - | number of samples each iteration |
| rho | - | fraction of best performing samples to take |
| alpha | - | smoothing parameter |
| sigma | - | std. deviation for image noise (optional) |
| y | - | image data (optional) |
| Outputs: | ||
| x | - | reconstructed image |
| y | - | original image |
| Inputs: | ||
| N | - | number of samples each iteration |
| rho | - | fraction of best performing samples to take |
| alpha | - | smoothing parameter |
| sigma | - | std. deviation for image noise |
| y | - | image data |
| Outputs: | ||
| x | - | reconstructed image |
| y | - | original image |
| v | - | the final probabilities |
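A short usage sketch, assuming bayes.m is on the MATLAB path; the call and parameter values follow the documented example above, while the test image itself is made up.

```matlab
% Build a small two-level test image and reconstruct it with the documented call.
image = [zeros(16, 8), ones(16, 8)];              % 16 x 16 binary image (illustrative)
[reimage, oldimage] = bayes(500, 0.04, 0.7, 0.1, image);
imagesc([oldimage, reimage]); colormap(gray);     % returned original next to the reconstruction
axis image off
```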
Problem Description
| Inputs: | ||
| N | - | Number of samples to generate each round |
| rho | - | fraction of best samples to take |
| alpha | - | smoothing parameter |
| traj | - | 0: node transitions, x(1)=1; 1: node transitions, x(1) random; 2: node placement (default) |
| n | - | number of jobs |
| m | - | number of machines |
| t | - | t(i,j) is the cost of job i on machine j |
| z | - | number of successive identical rho-th quantiles before stopping |
| Outputs: | ||
| pi | - | the output permutation |
| t | - | the cost (in time) matrix used |
| Inputs: | ||
| N | - | Number of samples to generate each round |
| rho | - | fraction of best samples to take |
| alpha | - | smoothing parameter |
| traj | - | 0: alias technique (default), uses x% = 70%; 1: composition method |
| n | - | number of jobs |
| m | - | number of machines |
| t | - | t(i,j) is the cost of job i on machine j |
| z | - | number of successive identical rho-th quantiles before stopping |
| Outputs: | ||
| pi | - | the output permutation |
| t | - | the cost (in time) matrix used |
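The listing above does not name the program, so the sketch below is an illustration only: a minimal CE loop in MATLAB for a permutation scheduling problem of this shape (n jobs, m machines, cost matrix t), with a flow-shop-style makespan as the assumed score and "node placement" sampling from a position-by-job probability matrix.

```matlab
% Minimal CE sketch for a permutation scheduling problem (illustration only).
% P(i,j) is the probability that job j is placed at position i ("node placement").
n = 8; m = 4; N = 500; rho = 0.1; alpha = 0.7; z = 5;
t = randi(20, n, m);                          % t(i,j): cost of job i on machine j
P = ones(n, n) / n; same = 0; gam_old = inf; best = inf;
while same < z
    X = zeros(N, n); S = zeros(N, 1);
    for k = 1:N
        p = P; perm = zeros(1, n);
        for i = 1:n                           % build a permutation position by position
            q = p(i, :) / sum(p(i, :));
            perm(i) = find(rand < cumsum(q), 1);
            p(:, perm(i)) = 0;                % a placed job cannot be chosen again
        end
        C = zeros(1, m);                      % flow-shop-style makespan (assumed score)
        for i = 1:n
            C(1) = C(1) + t(perm(i), 1);
            for j = 2:m, C(j) = max(C(j), C(j - 1)) + t(perm(i), j); end
        end
        X(k, :) = perm; S(k) = C(m);
    end
    [Ss, idx] = sort(S);
    gam = Ss(ceil(rho * N));                  % rho-th quantile of the scores
    elite = X(idx(1:ceil(rho * N)), :);       % the best rho*N permutations
    Pnew = zeros(n, n);
    for i = 1:n, for j = 1:n, Pnew(i, j) = mean(elite(:, i) == j); end, end
    P = alpha * Pnew + (1 - alpha) * P;       % smoothed frequency update
    if Ss(1) < best, best = Ss(1); pibest = elite(1, :); end
    same = (gam == gam_old) * (same + 1);     % stop after z unchanged quantiles
    gam_old = gam;
end
```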
Problem Description
| Inputs: | ||
| N | - | Number of samples to generate each round |
| rho | - | fraction of best samples to take |
| alpha | - | smoothing parameter |
| A | - | A(i,j) is the distance between node i and node j |
| traj | - | 0: node placements; 1: node transitions |
| Outputs: | ||
| pi | - | the best tour found |
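As a quick illustration of the A input above; the listing does not give the program's name, so the commented call below is hypothetical.

```matlab
n   = 20;
pts = rand(n, 2);                                                  % random cities in the unit square
A   = sqrt((pts(:,1) - pts(:,1)').^2 + (pts(:,2) - pts(:,2)').^2); % A(i,j): distance between node i and j
tourlen = @(tour) sum(A(sub2ind([n n], tour, tour([2:end 1]))));   % length of a closed tour
% pi = cetsp(1000, 0.01, 0.7, A, 1);   % hypothetical call matching the inputs listed above
```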
Problem Description
| Inputs: | ||
| Ne | - | Number of elite samples to use |
| Nmin | - | Minimum number of samples to use (must be >= Ne) |
| Nmax | - | Maximum number of samples to use (must be >= Ne) |
| alpha | - | Smoothing Parameter |
| d | - | number of successive iterations with the same S_t^* and no improvement in gamma_t_hat |
| c | - | number of successive iterations with N_t = Nmax |
| n | - | n queens on an nxn board |
| Outputs: | ||
| B | - | An n x n matrix with queens denoted by 1s and blank squares denoted by 0s |
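The board format described above can be checked with a small scoring helper; the sketch below (not part of the listed program) counts attacking pairs of queens, so a value of 0 means B is a valid solution.

```matlab
% Count attacking pairs on an n x n board B (1 = queen, 0 = blank); illustration only.
n = 8;
B = eye(n);                                   % a deliberately bad board: queens on the diagonal
[row, col] = find(B);                         % coordinates of each queen
pairs = nchoosek(1:numel(row), 2);            % every pair of queens
clash = sum(row(pairs(:,1)) == row(pairs(:,2)) ...
          | col(pairs(:,1)) == col(pairs(:,2)) ...
          | abs(row(pairs(:,1)) - row(pairs(:,2))) == abs(col(pairs(:,1)) - col(pairs(:,2))));
```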
| Inputs: | ||
| U | - | An 8 x 8 x k matrix of exact solutions |
| Outputs: | ||
| v | - | A vector of length k, labelling the solutions found |
Problem Description
| Inputs: | ||
| N | - | Number of samples to generate each iteration |
| g | - | Number of these samples to use to update parameters |
| alpha | - | Smoothing parameter |
| k | - | Number of cluster means to find |
| data | - | The data we are trying to fit means to (should be n x d, where there are n points and d dimensions) |
| modif | - | If 1, use modified smoothing; otherwise use standard smoothing |
| drplot | - | If 1, draws the cluster means and the data (for 2 dimensions) |
| c | - | Optional starting centroids |
| sigma_0 | - | Optional starting standard deviation |
| Outputs: | ||
| mu | - | The centroids found via the CE method, using Normal updating with the parameter set |
| count | - | The number of iterations taken |
| score | - | The final score of these centroids |
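A minimal sketch of CE clustering via Normal updating, under the assumption that each candidate is a full set of k centroids scored by the sum of squared distances of the points to their nearest centroid; the data, parameter values, and stopping rule are illustrative, not the listed program's.

```matlab
% CE clustering via Normal updating (sketch): sample candidate centroid sets,
% keep the g best, and refit the Normal sampling parameters with smoothing.
rng(0);
data = [randn(50, 2); randn(50, 2) + 4];          % toy data: two clusters in 2-D
[n, d] = size(data); k = 2;
N = 200; g = 20; alpha = 0.7;
mu  = repmat(mean(data), k, 1);                   % k x d sampling means
sig = repmat(std(data),  k, 1) * 2;               % k x d sampling std. deviations
for it = 1:50
    S = zeros(N, 1); C = zeros(k, d, N);
    for s = 1:N
        Ck = sortrows(mu + sig .* randn(k, d));   % candidate centroids, ordered to limit label switching
        C(:, :, s) = Ck;
        D = zeros(n, k);
        for j = 1:k, D(:, j) = sum((data - Ck(j, :)).^2, 2); end
        S(s) = sum(min(D, [], 2));                % squared distance to the nearest centroid
    end
    [~, idx] = sort(S);
    elite = C(:, :, idx(1:g));                    % the g best candidate sets
    mu  = alpha * mean(elite, 3)   + (1 - alpha) * mu;    % smoothed Normal updating
    sig = alpha * std(elite, 0, 3) + (1 - alpha) * sig;
    if max(sig(:)) < 1e-3, break; end             % stop once the sampling has collapsed
end
```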
| Inputs: | ||
| N | - | Number of samples to generate each iteration |
| rho | - | The fraction of samples used to update the probabilities |
| alpha | - | Smoothing parameter |
| k | - | Number of clusters to assign points to |
| data | - | The data we are trying to assign to clusters (should be n x d, where there are n points and d dimensions) |
| Outputs: | ||
| x | - | The best found assignment of the data points |
| Inputs: | ||
| c | - | A set of cluster means |
| data | - | The data we are trying to assign to clusters (should be n x d, where there are n points and d dimensions) |
| k | - | Number of clusters to assign points to |
| Outputs: | ||
| x | - | The assignment of the data points to clusters |
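This step is essentially a nearest-centroid assignment; a possible implementation is sketched below with a hypothetical function name (the listed program's internals may differ).

```matlab
function x = assign_points(c, data, k)      % hypothetical name; c is k x d, data is n x d
% Assign each data point to its nearest cluster mean (squared Euclidean distance).
D = zeros(size(data, 1), k);
for j = 1:k
    D(:, j) = sum((data - c(j, :)).^2, 2);  % squared distance of every point to mean j
end
[~, x] = min(D, [], 2);                     % x(i) is the index of the nearest mean
end
```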
| Inputs: | ||
| x | - | A labelling of data points |
| data | - | The data we are trying to fit means to (should be n x d, where there are n points and d dimensions) |
| k | - | Number of cluster means |
| Outputs: | ||
| x | - | The cluster means calculated for this labelling of data points |
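The complementary step, computing the means for a given labelling, might look like the sketch below (again with a hypothetical name; labels with no points would need extra handling).

```matlab
function c = label_means(x, data, k)        % hypothetical name; x is n x 1, data is n x d
% Compute the mean of the points carrying each label 1..k.
c = zeros(k, size(data, 2));
for j = 1:k
    c(j, :) = mean(data(x == j, :), 1);     % NaN if no point carries label j
end
end
```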
Problem Description
| Inputs: | ||
| N | - | Number of samples to generate each iteration |
| rho | - | Fraction of samples to use to update the choices |
| alpha | - | Smoothing parameter |
| M | - | A matrix of 1s and 0s, representing a maze |
| start | - | Starting position |
| finish | - | Ending position |
| Outputs: | ||
| pi | - | Output choice set |
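A sketch of how the maze inputs above might be set up; the program's name, the meaning of 1s versus 0s, and the start/finish format are not stated in the listing, so everything below is an assumption to be checked against the code itself.

```matlab
% Small example maze (assuming 1 = open square, 0 = wall).
M = [1 1 0 1;
     0 1 0 1;
     0 1 1 1;
     0 0 0 1];
start  = [1 1];      % assumed (row, column) of the entrance
finish = [4 4];      % assumed (row, column) of the exit
% pi = cemaze(500, 0.1, 0.7, M, start, finish);   % hypothetical call matching the inputs above
```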