FAILURE FREQUENCIES OF THE TRANSMITTER SYSTEM OF THE NIGERIAN TELEVISION AUTHORITY (NTA), UYO

CHAPTER ONE

GENERAL INTRODUCTION

1.0       INTRODUCTION/BACKGROUND OF THE STUDY

Modern technology has enabled the design of many complex systems whose operation, and in some cases safety, depends on the reliability of the various components making up the system, Udom (2010). Examples of such systems include the mechanical, electrical and electronic equipment found in home appliances, vehicles and power generating plants.

According to Mann (1973), a system is defined as a given equipment configuration. A system, therefore, is a set of interdependent components forming an integrated whole, and the relationships among the components of one system differ from those of another.

A system's components share common characteristics such as structure, behavior and interconnectivity. Based on the type of components it contains, a system can be classified as repairable or non-repairable.

A repairable system is one whose components can be restored to satisfactory operation by some action after failure. Such actions include repairs and changes to adjustable settings, for example the adjustment of the chain valve and brake of a vehicle engine.

On the other hand, a non-repairable system is one in which a component that fails is removed permanently from the system, and the system is restored to satisfactory operation by replacing the failed component with a new one. In a mechanical system like a vehicle engine, non-repairable components include the gasket, oil seal, timing chain (fan belt in cars), rings, pistons, crankshaft and connecting rods. In an electrical system such as a power generating plant, non-repairable components include the oil filter, fuel filter, water separator, fuel separator and radiator coolant. In electronic systems such as radios, televisions, laptops and mobile phones, they include capacitors, resistors, integrated circuits (ICs), transistors and display screens. The failure of one of these low-cost non-repairable components can cause the entire system to fail, and the loss caused by the system's failure may be far higher than the cost of the component itself. It is therefore important to know in advance when the failure of a system component is likely to occur, difficult as that is to detect, since this knowledge helps to improve the reliability of the system.
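The weight of a single component can be stated compactly. As a sketch in standard notation, assuming the components fail independently and are logically in series (so that any single failure brings the system down):

\[
R_{\text{sys}}(t) \;=\; \prod_{i=1}^{n} R_i(t) \;\le\; \min_{i} R_i(t),
\]

so the system is never more reliable than its least reliable component, however cheap that component may be.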

According to Sharma (2009), system failure is said to occur when a functional system becomes less effective or completely useless owing to sudden breakdown or gradual deterioration in its efficiency. System failure may be gradual, where the efficiency of the system keeps deteriorating with time and usage; it may also be sudden, where the system completely stops functioning after some period of use.

A failure mode describes the specific manner in which a failure occurs, in terms of the failure of the function of the item (a part or (sub)system) under investigation. It should clearly describe an end failure state of the item (or of the function, in the case of a functional FMEA) under consideration, and it is the result of the failure mechanism (the cause of the failure mode). For example, a fully fractured axle, a deformed axle, and an electrical contact stuck fully open or fully closed are each separate failure modes.

In reality, systems do not fail without cause. The factors that initiate a failure mode are known as the failure mechanism: the underlying cause, or sequence of causes, that sets off a process leading to a failure mode over time. Examples include poor development practices, incorrect assumptions about system requirements, a poor user interface, faulty hardware, inadequate user training or user error, a poor fit between the system and the organization, and defects in requirements, design, process, quality control, handling or part application.

Consequently, system failure can cause a breakdown in production: the failure of a pump in a refinery, for instance, can shut down the entire plant, causing heavy losses through lost production, idle labor, wastage and other damage. System failure can also inflate the perceived unreliability of a system. For example, if the condenser in an aircraft fails and the aircraft crashes, causing great loss of lives and property, the event can instill fear in the flying public out of all proportion to the actual reliability data on the safety of air travel.

Although it is very difficult to predict the time at which a particular component will fail, such uncertainty can be reduced by understanding the reliability of the system. Hence, in discussing system failure, reliability must be treated as central, since it is the basis from which the failure rate is derived.

Reliability describes the ability of a system to function under stated conditions for a specified period of time. It is the probability that, when operating under stated environmental conditions, the system will perform its intended function adequately for a specified interval of time, Kapur (1941), as also defined by Udom (2010).

A reliability study is therefore concerned with the random occurrence of undesirable failure events during the lifetime of a physical system, Kapur (1941). The reliability of any system is measured within the durability period of that system. To describe the reliability of a given system, it is necessary to specify the system's failure process, describe how the components of the system are connected, state its rule of operation, and identify the states in which the system is classified as failed. The probabilistic models used for these purposes are generally called "time to event" models, where the event is a failure. Several indices are used as reliability measures, including availability, maintainability, mean time to failure (MTTF), mean time between failures (MTBF) and mean time to repair (MTTR). Availability measures the percentage of time the system is effective within a given period. Maintainability is the probability that a system will be restored to a specified condition within a given period of time when maintenance is performed in accordance with prescribed procedures and resources, Ebeling (1997).
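To make these indices concrete, a minimal sketch of how they can be computed from operating records follows. The figures and variable names are illustrative assumptions, not values from the NTA log book:

# Minimal sketch: computing basic reliability indices from hypothetical
# operating records (illustrative values only, not the NTA Uyo log-book data).

uptimes = [120.0, 95.0, 150.0, 80.0, 110.0]   # hours of operation between failures
repair_times = [4.0, 6.5, 3.0, 5.0, 4.5]      # hours to restore after each failure

n_failures = len(uptimes)

# Mean time between failures: average operating time between successive failures.
mtbf = sum(uptimes) / n_failures

# Mean time to repair: average restoration time after a failure.
mttr = sum(repair_times) / n_failures

# Inherent availability: the long-run fraction of time the system is operable.
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h, availability = {availability:.2%}")

For a non-repairable component, the MTTF is simply the mean of its observed lifetimes; for a repairable system, the MTBF plays the corresponding role, and the inherent availability is MTBF/(MTBF + MTTR).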

In this work, we focus on the probabilistic modeling of the failure rate of non-repairable systems.

 

1.1       STATEMENT OF THE PROBLEM

System failure can cause heavy damage and losses to the system, its users and the consumers of the system's products and services. Yet many system failures are caused by very low-cost components that fail for lack of maintenance, and the cost of corrective replacement (replacing components after they fail) is considerably higher than the cost of preventive replacement of the same components.

We therefore seek to model the failure rate (time to failure) of non-repairable systems probabilistically, as this will help to specify the optimal lifetime of such systems and hence improve their reliability.

1.2       AIMS AND OBJECTIVES OF THE STUDY

The basic aim of this research work is to model the failure rate of non-repairable systems using probabilistic techniques, with a view to improving the reliability of such systems and developing a preventive replacement schedule for them.

The objectives are to:

  • Obtain the probability functions associated with reliability measures, namely the failure density function, the failure distribution function, the reliability function and the hazard function (their standard relationships are sketched after this list).
  • Obtain the reliability indices of the transmitter system, which include the mean time to failure (MTTF), the mean time between failures (MTBF), the mean time to repair (MTTR), the availability (AI) and the maintainability (M) of the system.
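As a reference sketch in standard notation, where T denotes the time to failure, f(t) the failure density, F(t) the failure distribution, R(t) the reliability function and h(t) the hazard function, these quantities are linked by the usual identities:

\[
F(t) = P(T \le t) = \int_0^t f(u)\,du, \qquad R(t) = 1 - F(t),
\]
\[
h(t) = \frac{f(t)}{R(t)}, \qquad \mathrm{MTTF} = \int_0^\infty R(t)\,dt,
\]

so any one of f, F, R and h determines the other three.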

 

1.3      SCOPE OF THE STUDY

This study covers the failure frequencies of the transmitter system of the Nigerian Television Authority (NTA), Uyo, which in this work typifies a non-repairable system.

 

1.4       SOURCE/METHOD OF DATA COLLECTION

The data on failure frequencies used in this study were collected from the log book of the technical department of NTA Uyo; they are therefore secondary data. Information on the causes and effects of failures of the transmitter system was obtained through a personal interview with the Head of the Technical Department.

 

1.5    HISTORICAL BACKGROUND OF NTA UYO

The Nigerian Television Authority, Uyo was established in 1988, following the creation of Akwa Ibom State, in pursuance of the Federal Government policy of locating one NTA station in every state capital in the country.

The station commenced test transmission in October 1992 with a skeletal staff of 19 deployed from NTA Calabar, under the supervision of an Assistant Chief Engineer, Engr. Eyo Umoh, who was assisted by Mr. Charles Udodom, an administrative officer deployed from the NTA headquarters, Lagos.

Full transmission commenced in 1993 with a 5 kW Marconi transmitter on channel 12, under the substantive General Manager, Mr. Ahmadu Aruwa, who was redeployed to NTA Jos in 1997.

In 1998, the station relocated from its temporary office at #7 Kevin Street, Uyo to its permanent site along Aka Etinan Road, Uyo. Mr. Aruwa's successor, Engr. Gregory Gbadamosi, was later redeployed to the NTA headquarters as Assistant Director, Engineering in August 2000, while Mr. Eso Egbobamien took over as the manager in charge from September to November 2000.

In November 2000, the manager of News/Current Affairs, Mrs. Christiana Obot, a one-time commissioner for information in the state, was appointed General Manager, becoming the first indigene and first woman to head the station. The station was among the few NTA stations to benefit from the allocation of new digital transmitters, which were successfully installed in 2001. With the installation of the new 5 kW Rohde & Schwarz transmitter, the station's signal was boosted significantly and can now be received throughout the state, extending to Rivers State in the west, Abia in the north and Cross River in the east. The station has been transmitting 24 hours a day for the last one year. The station's present staff strength is 73, made up of managers, middle managers and operators.

 

1.6 ASSUMPTIONS OF THE STUDY

The basic assumptions associated with this study are that:

  • Failures in the system occur at random.
  • Given the lifetime distribution of the system f(t), failure is assumed to occur at the end of a time period, say t.
  • The failures that occur at each time t are independent, and the lifetime distribution is continuous.
  • The normal routine preventive maintenance services on the system are still provided.
  • The failure of one component of the system causes the failure of the entire system (a minimal simulation of these assumptions is sketched after this list).
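The sketch below simulates a system under these assumptions: independent random component lifetimes, with the system failing at the first component failure. The number of components and the log-normal parameters are purely hypothetical, chosen only for illustration:

import random

# Minimal simulation of the stated assumptions (parameters are hypothetical):
# component lifetimes are random and independent, and the system fails as
# soon as any one component fails (a series structure).

random.seed(1)

N_COMPONENTS = 5       # hypothetical number of critical components
MU, SIGMA = 5.0, 0.6   # hypothetical log-normal parameters (log-hours)

def system_lifetime():
    # Draw an independent log-normal lifetime for each component;
    # the first component failure ends the system's life.
    lifetimes = [random.lognormvariate(MU, SIGMA) for _ in range(N_COMPONENTS)]
    return min(lifetimes)

samples = [system_lifetime() for _ in range(10_000)]
print(f"Simulated mean system lifetime: {sum(samples) / len(samples):.1f} hours")

Under these assumptions the system lifetime is the minimum of the component lifetimes, which is what the simulation averages.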


1.7       HYPOTHESES OF THE STUDY

H0:                         The failure rate in the transmitter system is log-normally distributed.

H1:                         The failure rate in the transmitter system is not log-normally distributed.
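A sketch of how this hypothesis could be tested follows, assuming the failure times extracted from the log book are available as an array. The sample values below are placeholders, not the NTA data, and since the log-normal parameters are estimated from the same sample, the Kolmogorov-Smirnov p-value is only approximate:

import numpy as np
from scipy import stats

# Placeholder times-to-failure in hours (illustrative, not the NTA data).
failure_times = np.array([52.0, 110.0, 75.0, 210.0, 98.0,
                          143.0, 67.0, 180.0, 88.0, 125.0])

# Fit a log-normal distribution with the location fixed at zero, then
# test the fit with a Kolmogorov-Smirnov goodness-of-fit test.
shape, loc, scale = stats.lognorm.fit(failure_times, floc=0)
ks_stat, p_value = stats.kstest(failure_times, "lognorm",
                                args=(shape, loc, scale))

# A small p-value favors H1 (rejecting log-normality at the chosen level).
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")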

 

1.8       SIGNIFICANCE OF THE STUDY

This study would be of immense benefit to NTA and to other users of transmitters in other establishments, as it will provide a preventive replacement schedule for the different components of the system. This schedule, if strictly adhered to, will improve the reliability of the system and allow its optimal use.

1.9       DEFINITION OF TERMS

Some basic terms associated with the probabilistic failure model are defined as follows:

  1. System: A set of interdependent components forming an integrated whole and operating together.
  2. Components: The individual parts which combine with other parts to form a system.
  3. Failure: The state in which the system does not work, or has stopped working according to specification, over a period of time.
  4. Failure rate: The frequency with which an engineered system fails.
  5. Deterioration: Failure which is progressive or gradual in nature; a type of failure in which the system's efficiency degenerates with time and usage.
  6. Sudden failure: A type of failure which occurs suddenly and completely. It does not involve deterioration; rather, the system stops working all of a sudden.
  7. Probability: The likelihood of occurrence or non-occurrence of an event. Hence, the probability of failure is the likelihood of occurrence or non-occurrence of failure in a system during a specified time.
  8. Failure density: The relationship between the number of failures in a system and the number of components in that system.
  9. Failure density function: The distribution of the time interval to the first failure. It is also known as the "lifetime distribution".
  10. Distribution function: The likelihood that a randomly selected system component fails by time t.
  11. Reliability: The likelihood that a system, under stated conditions, performs its intended function adequately for a specified interval of time.
  12. Reliability function: The likelihood of no failure in the system before time t. It is the ratio of the failure density function to the failure rate.
  13. Failure mode: The specific manner or way in which a failure occurs, in terms of the failure of the function of the item (a part or (sub)system) under investigation.
  14. Failure mechanism: Defects in requirements, design, process, quality control, handling or part application; the underlying cause, or sequence of causes, that initiates a process leading to a failure mode over a certain time.
  15. Maintainability: The ease with which maintenance of a functional unit can be performed in accordance with prescribed requirements.
  16. Availability: The probability that a system will work as required, when required, during a particular period of time.
  17. Mean time to repair (MTTR): The length of time required to restore operation to specification.
