Joshua Feinberg and Shmuel Fishman

### Abstract

We apply a probabilistic approach to study the computational complexity of analog computers that solve linear programming problems. We numerically analyze various ensembles of linear programming problems and obtain, for each ensemble, the probability distribution functions of three quantities that measure computational complexity: the convergence rate, the barrier, and the computation time. We find that in the limit of very large problems these probability distributions are universal scaling functions. In other words, the probability distribution function of each of these three quantities becomes, in the limit of large problem size, a function of a single scaling variable, which is a certain combination of the quantity in question and the size of the system. Moreover, the various ensembles studied seem to lead to essentially the same scaling functions, which depend only on the variance of the ensemble. These results extend analytical and numerical results obtained recently for the Gaussian ensemble, and support the conjecture that these scaling functions are universal.
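The ensembles referred to above can be sampled directly; Secs. IV A–C name the bimodal, diluted bimodal, and uniform cases alongside the Gaussian one. The following Python sketch is an illustration only — the dilution fraction `p` and the variance normalizations are our own choices, not taken from the paper. It draws matrix elements from each ensemble at matched variance, the one parameter the scaling functions are said to depend on:

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian(size, sigma=1.0):
    return rng.normal(0.0, sigma, size)

def bimodal(size, sigma=1.0):
    # entries are +/- sigma with equal probability; variance = sigma**2
    return sigma * rng.choice([-1.0, 1.0], size)

def diluted_bimodal(size, sigma=1.0, p=0.5):
    # a fraction p of entries are +/- a, the rest are zero;
    # a = sigma / sqrt(p) makes the variance equal sigma**2
    a = sigma / np.sqrt(p)
    signs = rng.choice([-1.0, 1.0], size)
    mask = rng.random(size) < p
    return a * signs * mask

def uniform(size, sigma=1.0):
    # uniform on [-w, w]; w = sigma * sqrt(3) makes the variance sigma**2
    w = sigma * np.sqrt(3.0)
    return rng.uniform(-w, w, size)

N = 200_000
for draw in (gaussian, bimodal, diluted_bimodal, uniform):
    sample = draw(N)
    # all four ensembles share (approximately) unit variance
    print(draw.__name__, round(sample.var(), 2))
```

Because the four ensembles share the same variance, empirical distributions of a complexity measure computed over instances drawn from them can be compared on a single scaling axis, which is the kind of comparison the universality conjecture calls for.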

Digital computers are part of our cultural environment. The exploration of the difficulty of various problems solved on digital computers is a central issue in computer science. Usually these studies assume the worst-case scenario. In the present work a model for an analog computer is proposed, in which the computation is performed by a physical system and analyzed in the framework of the theory of dynamical systems. The linear programming problem is studied as an example. Distribution functions of quantities that characterize the computational difficulty are computed for an ensemble of scenarios, rather than for the worst case. In the asymptotic limit of very large problems, each of these probability distribution functions reduces to a universal scaling function that depends on a single scaling variable and is independent of the details of its parent probability ensemble. These functions are reminiscent of the scaling functions familiar from the theory of thermodynamic phase transitions.
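The flow itself is not reproduced in this excerpt. As a hedged illustration of how a dynamical system can solve a linear program, the sketch below integrates a Faybusovich-type projected gradient flow, x' = (X − XAᵀ(AXAᵀ)⁻¹AX)c with X = diag(x), the kind of flow used in related work on analog linear programming; the step sizes, tolerances, and instance dimensions here are our own assumptions, not the paper's:

```python
import numpy as np

def faybusovich_step(x, A, c, dt):
    """One explicit-Euler step of x' = (X - X A^T (A X A^T)^{-1} A X) c."""
    X = np.diag(x)
    AX = A @ X
    v = (X - X @ A.T @ np.linalg.solve(AX @ A.T, AX)) @ c
    neg = v < 0
    if neg.any():
        # shrink the step so that x stays strictly positive
        dt = min(dt, 0.5 * np.min(-x[neg] / v[neg]))
    return x + dt * v

def steps_to_stagnation(A, x0, c, tol=1e-6, dt=1e-2, max_steps=20000):
    """Integrate until the objective gain per step drops below tol."""
    x = x0.copy()
    for k in range(max_steps):
        x_new = faybusovich_step(x, A, c, dt)
        if c @ x_new - c @ x < tol:
            return k, x_new
        x = x_new
    return max_steps, x

rng = np.random.default_rng(0)
m, n = 3, 8
A = rng.standard_normal((m, n))   # one Gaussian-ensemble instance
x0 = rng.uniform(0.5, 1.5, n)     # strictly positive starting point
b = A @ x0                        # x0 is feasible by construction
c = rng.standard_normal(n)
steps, x = steps_to_stagnation(A, x0, c)
```

The flow preserves the equality constraints Ax = b exactly (A annihilates the velocity term by term) and increases the objective c·x monotonically, since the velocity is a positive-semidefinite matrix applied to c; the number of integration steps to stagnation then serves as a crude proxy for the "computation time" studied in the text.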

It is a great pleasure to thank our colleagues Asa Ben-Hur and Hava Siegelmann for very useful advice and discussions. This research was supported in part by the Shlomo Kaplansky Academic Chair, by the Technion-Haifa University Collaboration Fund, by the U.S.-Israel Binational Science Foundation (BSF), by the Israeli Science Foundation (ISF), and by the Minerva Center of Nonlinear Physics of Complex Systems.

### Contents

I. INTRODUCTION

II. A DYNAMICAL FLOW FOR LINEAR PROGRAMMING

III. PROBABILITY DISTRIBUTIONS AND SCALING

IV. NON-GAUSSIAN DISTRIBUTIONS

A. The bimodal distribution

B. The diluted bimodal distribution

C. The uniform distribution

V. UNIVERSALITY

VI. SUMMARY AND DISCUSSION


## Figures

is plotted as a function of , for the bimodal distribution (for ). The number of instances used in the simulation is 39 732 for the case, 46 583 for the case and 47 169 for the case. The number of converging instances for each case is 5000.

is plotted as a function of for the bimodal distribution for the instances of Fig. 1.

is plotted as a function of for the instances of Fig. 1.

is plotted as a function of for the diluted bimodal distribution, where . As before, . The number of instances used in the simulation is 54 951 for the case, 41 107 for the case and 50 863 for the case. The number of converging instances for each case is 5000.

is plotted as a function of for the diluted bimodal distribution, where . As before, . The number of instances used in the simulation is 54 620 for the case, 37 697 for the case, and 65 367 for the case. The number of converging instances for these cases was, respectively, 4980, 3725, and 5921.

is plotted as a function of for the uniform distribution. As before, . The number of instances used in the simulation is 121 939 for the case, 91 977 for the case and 112 206 for the case. The number of converging instances for each case is 20 000.

as a function of (for and ). The graphs are scaled to fit the theoretical Gaussian result by appropriate choice of the factors .

as a function of for all the distributions checked (for and ), where the scale factors were found by least squares fit to the distribution for the Gaussian ensemble, which was found numerically as well.

as a function of for all the distributions checked (for and ). The scale factors were found by least squares fit to the distribution for the Gaussian ensemble, which was found numerically as well.
