For A-optimality, by virtue of the Cramér–Rao bound, the trace of the inverse of the information matrix for the parameters serves as a lower bound for the sum of the variances of the estimators, and the bound is attained asymptotically. Hence, asymptotically, A-optimality is achieved by minimizing the trace of the inverse of the information matrix. For non-linear models, however, the Cramér–Rao bound is crude for finite samples, so the asymptotic solution can differ substantially from the design that minimizes the sum of variances. We explore the validity of the asymptotic solution by directly minimizing the sum of variances using numerical methods in a restricted search space. We demonstrate that even in the very restrictive search space of point-symmetric designs, the theoretical solution is only half as efficient as the numerically obtained design for a sample size of 100. Further improvement can be achieved by relaxing the restriction that the solution be point symmetric. The A- and D-optimal designs for the logistic model depend on the unknown parameters of the model. Therefore, to obtain an optimal design the experimenter must inform the design with some prior knowledge, or a guess, of the unknown parameters. This is a severe limitation on the ability to identify an optimal design, especially when there is little prior information to inform the guess. Here we explore the use of a two-stage A-optimal design for finite samples and a three-stage D-optimal design for large samples to mitigate the loss in efficiency that may arise from poor guess values. We demonstrate that the two-stage finite-sample design yields a gain in efficiency for small sample sizes when 70% of the sample is allocated to the first stage. The three-stage D-optimal design is shown to be almost always better than the single-stage and the corresponding two-stage designs.
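The asymptotic A-optimality objective described above (the trace of the inverse of the information matrix) can be sketched numerically. The snippet below is a minimal illustration, assuming a two-parameter logistic model p(x) = 1/(1 + exp(-(alpha + beta*x))) and a design represented as support points with weights; the parameterization and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fisher_information(points, weights, alpha, beta):
    """Fisher information matrix for the two-parameter logistic model
    p(x) = 1 / (1 + exp(-(alpha + beta * x))).
    (Illustrative parameterization; the paper's model may differ.)"""
    x = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))
    v = w * p * (1.0 - p)  # per-point information weight
    return np.array([[v.sum(),       (v * x).sum()],
                     [(v * x).sum(), (v * x**2).sum()]])

def a_criterion(points, weights, alpha, beta):
    """A-optimality objective: trace of the inverse information matrix,
    the asymptotic lower bound on the sum of estimator variances."""
    return np.trace(np.linalg.inv(fisher_information(points, weights, alpha, beta)))
```

For a point-symmetric design around the ED50 (alpha = 0), the information matrix is diagonal, and the criterion can be compared across candidate designs to locate the asymptotic optimum within a restricted search space.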