control system
The sole objective of a control system is to generate feasible inputs to the plant (e.g. a dynamic system) such that it operates as intended under a wide range of operating conditions.
Examples of control systems: cruise control, autopilot, rice cooker.
For a general finite dimensional dynamic system in its ODE form,
$\dot{x} = f(x(t),t) + g(x(t),u(t),t),$  (1)
$y = h(x(t),t),$
where $x\in {\mathbb{R}}^{n}$ is the state, $y\in {\mathbb{R}}^{l}$ is the output and $u\in {\mathbb{R}}^{m}$ is the control input of the system. In the control literature, equation 1 is generally referred to as the plant, where the function $f:{\mathbb{R}}^{n}\times \mathbb{R}\to {\mathbb{R}}^{n}$ governs the system dynamics, $g:{\mathbb{R}}^{n}\times {\mathbb{R}}^{m}\times \mathbb{R}\to {\mathbb{R}}^{n}$ determines how the input (control signals) influences the state $x$ via actuators (e.g. a gas turbine), and $h:{\mathbb{R}}^{n}\times \mathbb{R}\to {\mathbb{R}}^{l}$ determines how the state generates the output signal. If $m$ is equal to $n$, the plant is fully actuated; if $m<n$ the plant is under-actuated, and otherwise ($m>n$) the plant is over-actuated. A plant that does not depend explicitly on time $t$ is called an autonomous system. The main difference between a control system and a general dynamic system is the additional input signal $u(t)$.
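The general form (1) can be sketched numerically. Below is a minimal illustration, with all names and numbers chosen for the example (not from the text): a double integrator with $n=2$, $m=1$, $l=1$ (so under-actuated in the sense above), integrated with forward Euler.

```python
import numpy as np

# Hypothetical plant in the form x' = f(x,t) + g(x,u,t), y = h(x,t):
# a double integrator (state = [position, velocity], input = acceleration).
def f(x, t):
    # Drift dynamics: position integrates velocity; velocity has no drift.
    return np.array([x[1], 0.0])

def g(x, u, t):
    # Actuator channel: the input u acts as an acceleration on the velocity.
    return np.array([0.0, u[0]])

def h(x, t):
    # Output map: only the position is measured (l = 1).
    return np.array([x[0]])

def simulate(x0, u_func, T=1.0, dt=1e-3):
    """Forward-Euler integration of the plant under the input signal u_func(t)."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < T:
        x = x + dt * (f(x, t) + g(x, u_func(t), t))
        t += dt
    return x, h(x, t)

# Constant unit acceleration for 1 s: velocity ends near 1, position near 0.5.
x_final, y_final = simulate([0.0, 0.0], lambda t: np.array([1.0]))
```

The split into $f$ and $g$ mirrors the definition: $f$ alone describes the uncontrolled dynamic system, while $g$ is the only place the control signal $u$ enters.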
For example, to control an airplane, the control system has to command the thrust, flaps, ailerons and rudder; these are the control signals $u$ of the system. These control inputs influence the system state $x$, such as the speed (via thrust) and the attitude and orientation (via the flaps, ailerons and rudder). To physically alter the state of the airplane, actuators such as gas turbines are needed, which are driven by the control signals $u$.
The control signal $u$ can be generated in an open-loop or a closed-loop fashion. An open-loop control system generates $u$ from the user- or operator-supplied reference state ${x}_{d}$ or reference output ${y}_{d}$ only; a closed-loop control system uses both the reference and feedback signals, which are usually measured by sensors. In the airplane attitude control example, the desired attitude is usually represented in the roll-pitch-yaw angle representation, and these signals are measurable by attaching sensors to the flaps, ailerons and rudder. In engineering practice, only closed-loop control systems should be used, since open-loop systems are not robust against uncertainties, modeling errors and measurement errors.
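The robustness argument can be made concrete with a small numerical sketch. Everything here is an illustrative assumption, not from the text: a scalar plant $\dot{x}=ax+u$ whose true parameter differs from the model used to design the open-loop input.

```python
# True vs. assumed dynamics for the scalar plant x' = a*x + u.
# The mismatch between A_TRUE and A_MODEL represents a modeling error.
A_TRUE, A_MODEL = -0.8, -1.0
XD = 1.0                     # reference state x_d
DT, STEPS = 0.01, 5000       # Euler step and horizon (50 s, enough to settle)

def run(u_of_x):
    """Simulate the true plant under the control policy u_of_x(x)."""
    x = 0.0
    for _ in range(STEPS):
        x += DT * (A_TRUE * x + u_of_x(x))
    return x

# Open loop: a constant u chosen so that the *model* would settle at XD.
x_open = run(lambda x: -A_MODEL * XD)
# Closed loop: u = K*(XD - x) reacts to the measured state of the *true* plant.
x_closed = run(lambda x: 10.0 * (XD - x))
```

The open-loop run settles at $-u/a_{true}=1.25$ instead of $1$, inheriting the full modeling error, while the feedback run ends much closer to the reference; this is the sense in which open-loop control is not robust.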
If a closed-loop control system is based on state feedback, it is called a state-feedback control system. By the same token, an output-feedback control system is based on output feedback only. Notice that output signals are available for feedback by definition; in reality, however, not all the states are measurable. If a state-feedback control system has all the states available for feedback, it is called a full-state feedback system; otherwise it is called a partial-state feedback system, which usually requires a state observer (e.g. a Kalman filter) to estimate the unavailable states.
To illustrate the concept of a control system, we will use a simple example. A truck driver is required to travel 1000 Km in 10 hours. To relieve the stress on his foot, the driver has wedged a stick against the gas pedal so the truck travels at $100\,\mathrm{Km/h}$. Under perfect conditions, the driver will reach the destination in the allocated time. However, a certain section of the road is uphill, so the truck slows down by a considerable amount and will not arrive at its destination in time. To remedy this problem, the driver 'implemented' a simple solution using the speedometer: the gas pedal position ${p}_{set}$ of the truck now depends on the current speed ${v}_{current}$ of the truck, ${p}_{set}=K(100\,\mathrm{Km/h}-{v}_{current})$, where $K>0$ is an adjustable parameter. So if the truck is running too slow (e.g. uphill), ${p}_{set}$ will be positive (more gas to the engine) and the speed will increase toward the desired speed; vice versa for the downhill case.
In this example, we have outlined all the major components of a typical control system:
• Actuator: engine,
• Sensor: speedometer,
• Plant: truck,
• Control input: gas pedal,
• Control objective: 1000 Km in 10 hrs,
• Control law: ${p}_{set}=K(100\,\mathrm{Km/h}-{v}_{current})$.
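The driver's control law is a proportional (P) controller, and its behavior is easy to simulate. The speed model below is an illustrative assumption (a first-order lag with made-up drag, engine-gain and hill constants), not part of the original example.

```python
# Assumed speed dynamics: v' = -A*v + B*p - hill, where p is the pedal
# position and "hill" is an uphill disturbance. All constants are illustrative.
A, B = 0.1, 1.0        # drag and engine-gain coefficients (assumed)
V_DESIRED = 100.0      # desired speed in Km/h
K = 5.0                # the driver's adjustable gain

def pedal(v_current):
    # Control law from the text: more gas when the truck runs too slow.
    return K * (V_DESIRED - v_current)

def drive(hill=0.0, T=200.0, dt=0.01):
    """Euler-simulate the closed loop and return the final speed."""
    v = 0.0
    for _ in range(int(T / dt)):
        v += dt * (-A * v + B * pedal(v) - hill)
    return v

flat = drive(hill=0.0)   # settles near 100 Km/h
up = drive(hill=20.0)    # the uphill drag lowers the speed only slightly
```

Note that a pure proportional law leaves a small steady-state error (the loop settles at $v=(BK\,V_{DESIRED}-hill)/(A+BK)$, slightly below 100 Km/h), which is why practical cruise controls add an integral term; the feedback nonetheless keeps the uphill speed drop small instead of letting the truck bog down as with the fixed stick.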
The science aspect of control systems is the study of the design, synthesis and analysis of control systems using mathematical concepts; the engineering aspect is to implement, construct and tune control systems according to real-life situations and limitations.
Title: control system
Canonical name: ControlSystem
Date of creation: 2013-03-22 14:22:59
Last modified on: 2013-03-22 14:22:59
Owner: ppirrip (5555)
Last modified by: ppirrip (5555)
Numerical id: 9
Author: ppirrip (5555)
Entry type: Definition
Classification: msc 93A10
Related topic: SystemDefinitions