Tuesday, December 31, 2019

Measuring Crime

Measuring Crime in the United States
Kyra Pettit
CJA/204
August 5, 2013
Dr. Wafeeq Sabir

Measuring Crime in the United States

In the following paper, these criminal justice students will address the three major points of crime measurement in the United States. Crime statistics may change even when the crime rate does not, because crime can be measured in numerous ways. Two measuring systems, the National Crime Victimization Survey (NCVS) and the Uniform Crime Reports (UCR), report criminal information but do not capture all of it. Because of the different factors that go into reporting crime, some crimes go unreported.

Instruments

Crime is measured through statistics in the United States. ...show more content... All data is collected from federal, state, and local automated records systems (FBI, 2011).

Related Rates of Crime

Crime rates and arrest rates are difficult for a law enforcement agency to produce due to the high volume of calls received. Some examples of calls that do not require an arrest include lost pets, individuals needing medical assistance, and noise complaints. Each agency must make an organized effort to make contact with the individuals making both high- and low-priority calls because of the unseen or unreported information at the caller's location. The law enforcement community has also created an organization devoted to crime reports, the Police Executive Research Forum (PERF), whose data is based on law enforcement agencies (Schmalleger, F. 2011, p. 3). Clearance rates are defined as cases that have been solved; in many cases, if an arrest was made, then the case has been cleared. Some cases are never cleared: for example, if an offender commits a murder and he or she is found dead or flees the country before an arrest is made, the data does not go into the cleared category. Recidivism rates are directly related to the quality of life after an offender is released from prison. Many offenders repeat crime-related offenses due to drug addiction. At times, serious pathological offenders create a threat to the community; therefore, recidivism rates are often used in determining the punishment required for the offender.

Monday, December 23, 2019

Causes of Post Traumatic Stress Disorder Essay

Post traumatic stress disorder focuses primarily on the way that the mind is affected by traumatic experiences. At least 50% of all adults and children are exposed to a psychologically traumatic event: they have either been through war or have witnessed a death, a threat to their life, a bad accident, or a severe natural disaster such as an earthquake or tornado. PTSD is linked to structural neurochemical changes in the central nervous system which may have a direct biological effect on health, including vulnerability to hypertension and atherosclerotic heart disease. As for how to cope with PTSD, I myself tend to avoid things that remind me of my event. I have tried talking to a counselor, and to me I was having more difficulty letting go or dealing ...show more content... That is the hardest part to deal with, knowing I can't really do anything to help. I still to this day find no fun in sports, or hiking, wakeboarding, four wheeling, and all the old things I used to love to do. I am getting into basketball again, but just shooting around, and that only lasts about 5 minutes. I have dealt with this for 6 years now, and I will say it has gotten a lot better for me, but it is still very difficult to deal with. I feel that if I don't think about my event I am forgetting that person, so I think about the times we had and the love. I go into another, deeper depression, and it is even more difficult to get out of just that and deal with PTSD itself. Others who deal with PTSD tend to feel distress and avoidance after being exposed to a severely traumatic experience. They say this is a normal and adaptive response and often includes reliving the event in thoughts, images, and dreams. I have also read stories about people using marijuana and ecstasy, and they say it helped. When I went to cocaine it helped me, but after 3 months I hit rock bottom with my PTSD; I started feeling guilty about life and that I was a waste. I started thinking crazy thoughts, so I went to drinking, and that did not help me. I smoked marijuana, and that helped, but I had to smoke like a chimney, and at one point I smoked so much I felt normal instead of high and all the thoughts would come at once. Post traumatic stress will interfere with a person's life and becomes hard to get used to.

Saturday, December 14, 2019

VHDL implementation using spike sorting algorithm

Abstract

This project implements a spike sorting algorithm in VHDL. The algorithm is based on the k-means algorithm. The main procedure includes three stages: writing the code in MATLAB, converting the code from MATLAB to VHDL, and implementing the VHDL code on an FPGA board. First, the given neural data is grouped into clusters using the k-means algorithm in MATLAB, to identify the number of neurons in the data and the times at which each neuron fires. Then the parameters used in the MATLAB code are converted into VHDL. After that, the code is applied on the FPGA board to find the power consumption. The simulation results demonstrate the VHDL implementation of the spike sorting algorithm.

CHAPTER 1 INTRODUCTION

Spike sorting is the grouping of spikes into clusters based on the similarity of their shapes. Given that, in principle, each neuron tends to fire spikes of a particular shape, the resulting clusters correspond to the activity of different putative neurons. The end result of spike sorting is the determination of which spike corresponds to which of these neurons. This classification of which spike corresponds to which neuron is a very challenging problem. Before tackling mathematical details and technical issues, it is important to discuss why we need to do such a job, rather than just detecting the spikes for each channel without caring from which neuron they come.

A large amount of research in neuroscience is based on the study of the activity of neurons recorded extracellularly with very thin electrodes implanted in animals' brains. These microwires 'listen' to a few neurons close by the electrode tip that fire action potentials or 'spikes'. Each neuron has spikes of a characteristic shape, which is mainly determined by the morphology of its dendritic tree and its distance and orientation relative to the recording electrode.

It is already well established that complex brain processes are reflected by the activity of large neural populations and that the study of single cells in isolation gives only a very limited view of the whole picture. Therefore, progress in neuroscience relies to a large extent on the ability to record simultaneously from large populations of cells. The implementation of optimal spike sorting algorithms is a critical step forward in this direction, since it can allow the analysis of the activity of a few close-by neurons from each recording electrode. This opens a whole spectrum of new possibilities. For example, it is possible to study connectivity patterns of close-by neurons or to study the topographical organization of a given area and discriminate the responses of nearby units. It is also possible to have access to the activity of sparsely firing neurons, whose responses may be completely masked by other neurons with high firing rates, if not properly sorted. Separating sparsely firing neurons from a background of large multi-unit activity is not an easy job, but this type of neuron can show striking responses.

In principle, the easiest way to separate spikes corresponding to different neurons is to use an amplitude discriminator. This classification can be very fast and simple to implement on-line.
However, sometimes spikes from different neurons may have the same peak amplitude but different shapes. Then, a relatively straightforward improvement is to use 'window discriminators', which assign the spikes crossing one or several windows to the same neuron. This method is implemented in commercial acquisition systems and is still one of the most preferred ways to do spike sorting. Window discriminators can be implemented on-line, but have the main disadvantage that they require a manual setting of the windows by the user, which may need readjustment during the experiment. For this reason it is in practice not possible to sort spikes of more than a few channels simultaneously with window discriminators. Another major drawback of this approach is that in many cases spike shapes overlap and it is very difficult to set up windows that will discriminate them. This, of course, introduces a lot of subjectivity into the clustering procedure. Moreover, sparsely firing neurons may be missed, especially if the particular input (or particular behavior) that elicits the firing of the neuron is not present while the windows are set.

Another simple strategy for spike sorting is to select a characteristic spike shape for each cluster and then assign the remaining spikes using template matching. This method was pioneered by Gerstein and Clark, who implemented an algorithm in which the user selects the templates and the spikes are assigned based on a mean square distance metric. This procedure can also be implemented on-line, but, like the window discriminator, it has a few drawbacks. First, it requires user intervention, which makes it impractical for a large number of channels. Second, the templates may have to be adjusted during the experiment. Third, when the spike shapes overlap it may not be straightforward to choose the templates or to decide how many templates should be taken. Fourth, as with the window discriminator, it may miss sparsely firing neurons.

Current acquisition systems allow the simultaneous recording of up to hundreds of channels. This opens the fascinating opportunity to study large cell populations to understand how they encode sensory processing and behavior. The reliability of these data critically depends on accurately identifying the activity of the individual neurons with spike sorting. To deal with large numbers of channels, supervised methods such as the ones described in this section are highly time consuming, subjective, and nearly impossible to use in the course of an experiment. It is therefore clear that there is a need for new methods to deal with recordings from multiple electrodes. Surprisingly, the development of such methods has been lagging far behind the capabilities of current hardware acquisition systems. There are three main characteristics that these methods should have: i) they should give significant improvements (in terms of reliability or automatization) over a simple window discriminator or a template matching algorithm, otherwise their use seems not justified; ii) they should be unsupervised, or at least they should offer a hybrid approach where most of the calculations are done automatically and user intervention may be required only for a final step of manual supervision or validation; iii) they should be fast enough to give results in a reasonable time and, eventually, it should be possible to implement them on-line.
CHAPTER 2 LITERATURE REVIEW

The detection of neural spike activity is a technical challenge that is a prerequisite for studying many types of brain function. Measuring the activity of individual neurons accurately can be difficult due to large amounts of background noise and the difficulty in distinguishing the action potentials of one neuron from those of others in the local area. This article reviews algorithms and methods for detecting and classifying action potentials, a problem commonly referred to as spike sorting. The article first discusses the challenges of measuring neural activity and the basic issues of signal detection and classification. It then reviews and illustrates algorithms and techniques that have been applied to many of the problems in spike sorting and discusses the advantages and limitations of each, and the applicability of these methods to different types of experimental demands. The article is written both for the physiologist wanting to use simple methods that will improve experimental yield and minimize the selection biases of traditional techniques, and for those who want to apply or extend more sophisticated algorithms to meet new experimental challenges.

K-Means Clustering

Clustering is a widely used segmentation technique for data analysis, whether the data is an image or a signal in the form of a waveform. Image segmentation is an image analysis process that aims at partitioning an image into several regions according to a homogeneity criterion. Segmentation can be a fully automatic process, but it achieves its best results with semi-automatic algorithms, i.e. algorithms that are guided by a human operator. Most of the existing segmentation algorithms are highly specific to a certain type of data, and research is being pursued to develop generic frameworks integrating these techniques. Segmentation means partitioning according to a given criterion; clustering is a segmentation technique in which the partitioning is based on similarity. Clustering, then, is the proper placing of a group of similar observations into subsets. Image segmentation is a very complex task, which benefits from computer assistance, and yet no general algorithm exists. The concept of a semi-automatic process naturally involves an environment in which the human operator interacts with the algorithms and the data in order to produce optimal segmentations. In the medical field, image segmentation has become an essential tool for accurate processing, and clustering allows doctors to examine the data accurately for diagnosis. (The term 'clustering' is also used for grouping many computers into a single system; here it refers to dividing an image or signal and grouping similar data elements into named sets, or clusters.) A cluster therefore consists of data elements with similar values. Clustering can also be defined as grouping members that are similar in some way: the similar members are bound together as a group, and that group is termed a cluster. A graphical representation of clustering is shown in the following diagram:

Figure: an example of the clustering approach for the given membership elements
Clustering takes unlabelled data and binds it into a particular structure, and that structure is termed a cluster. A cluster is therefore a collection of similar objects, in which the objects are similar to each other within their respective cluster and different from those in the other clusters. From the figure above we can conclude that the members are divided by similarity and grouped into four clusters, in which each cluster contains members that are all interrelated, while members of different clusters are not interrelated; the four clusters are thus distinct from one another. One important consideration in clustering: if there are, for example, 52 data elements, the maximum possible number of clusters is 52 (one per element), but forming a large number of clusters loses accuracy and is hugely time-consuming. To overcome this drawback, fewer clusters are used, balancing time and accuracy; with 52 elements, dividing them into 4 clusters makes processing the members easier in terms of both accuracy and time. After grouping into clusters there are two important tasks: finding the centroid and finding the distance. The centroid calculation measures the distance between the center value and the respective element. The distance calculation measures the distance between the respective value and the neighboring values; it checks how the elements are interrelated to one another. The distance measurement is illustrated by the following example, in which the measurement depends on minute accuracy.

Figure: calculation of the distance between the membership values or membership elements

Distance measurement between the data points is the important task in analyzing a cluster. If the components in the same cluster are all interrelated, a small Euclidean distance is required for the formation of the cluster and for finding the distances within it. The problem arises from the mathematical formula used to combine the distances between the single components of the data feature vectors into a single unique distance measure which may be used for clustering. Even in exceptional cases the Euclidean distance may mislead, so all of this must be taken into consideration. An ideal medical image segmentation scheme should possess some preferred properties such as minimum user interaction, fast computation, and accurate and robust segmentation results. Medical image segmentation plays an instrumental role in clinical diagnosis.
In image segmentation, one challenge is how to deal with the nonlinearity of real data distributions, which often makes segmentation methods require more human interaction and produces unsatisfactory segmentation results. There are numerous classifications proposed in the specialized literature, each relevant to the point of view required by the study. Since this research project deals with medical image segmentation, where the large majority of the acquired data is grey-scaled, all the techniques concerning color images will be left aside. The techniques are categorized into three main families:

• Edge based techniques.
• Region based techniques.
• Pixel based techniques.

A brief overview of these techniques follows. Among the approaches encountered are neural network segmentation, region growing, clustering, probabilistic and Bayesian approaches, tree/graph based approaches, edge based segmentation, and histogram thresholding.

Histogram thresholding

A histogram is a graphical representation of the pixel values, and thresholding on it is the simplest technique of the pixel based family. Histogram thresholding consists of finding an acceptable threshold in the grey levels of the input image in order to separate the object(s) from the background. This kind of histogram is sometimes referred to as bimodal, since the grey-level histogram of an ideal image will clearly show two distinct peaks, well fit by Gaussians, representing the grey levels of the object and of the background. Gaussian filtering is one of the methods for finding the threshold value of an image.

Edge-based segmentation

The simplest method of this type is known as detect and link. The algorithm first tries to detect local discontinuities and then tries to build longer ones by connecting them, hopefully leading to closed boundaries which circumscribe the objects in the image. The edge-based family of techniques tries to detect edges in an image so that the boundaries of the objects can be inferred. As a consequence, the image will not sharply split into regions. Some improvements to this method have been proposed in order to overcome this type of issue.

Region-based segmentation

Image regions belonging to an object generally have homogeneous characteristics. The region growing algorithms start from well chosen seeds. They then expand the seed regions by annexing their homogeneous neighbors. The process is iterated until all the pixels in the image have been classified. The region-based family of techniques fundamentally aims at iteratively building regions in the image until a certain level of stability is reached. The region splitting algorithms use the entire image as a seed and split it into regions until no more heterogeneity can be found. The shape of an object can be described in terms of its boundary or the region it occupies. Other region-based segmentation techniques include the split-and-merge type and the watershed type.

Split and merge type: In the split-and-merge technique, an image is first split into many small regions during the splitting stage as per the given criteria, and then the regions are merged depending on sufficient similarity to produce the final segmentation.
Watershed-based segmentation: In watershed-based segmentation, the gradient magnitude image is considered as a topographic relief where the physical elevation represents the brightness value of each voxel. An immersion type of approach is used for the calculation of the watersheds. The procedure results in a partitioning of the image into many catchment basins, the borders of which define the watersheds. To reduce over-segmentation, the image is smoothed by 3D adaptive anisotropic diffusion prior to the watershed operation. Semi-automatic merging of volume primitives returned by the watershed operation is then used to produce the final segmentation. The operation can be described by imagining that holes are pierced in each local minimum of the topographic relief. Then, the surface is slowly immersed in water, which causes a flooding of all the catchment basins, starting from the basin associated with the global minimum. As soon as two catchment basins begin to merge, a dam is built.

In image processing and photography, a color histogram is a representation of the distribution of colors in an image. For digital images, a color histogram represents the number of pixels that have colors in each of a fixed list of color ranges that span the image's color space, the set of all possible colors. Color histograms are flexible constructs that can be built from images in various color spaces, whether RGB, rg chromaticity or any other color space of any dimension. The color histogram is a statistic that can be viewed as an approximation of an underlying continuous distribution of color values. The color histogram can be built for different kinds of color space images, although the term is more often used for three-dimensional color space representations like RGB, NTSC or HSV. The term intensity histogram may be used instead for monochromatic images. For multi-spectral images, where each pixel is represented by an arbitrary number of measurements (for example, beyond the three measurements in RGB), the color histogram is N-dimensional, with N being the number of measurements taken. Each measurement has its own range of the light spectrum of different wavelengths, some of which may lie around or outside the visible spectrum. If the possible set of color values is sufficiently small, each of those colors may be placed in a range by itself; then the histogram is merely the count of pixels that have each possible color. Most often, the space is divided into an appropriate number of ranges, often arranged as a regular grid, each containing many similar color values. The color histogram may also be represented and displayed as a smooth function defined over the color space that approximates the pixel counts. A histogram of an image is produced first by discretization of the colors in the image into a number of bins, and then counting the number of image pixels in each bin. For example, a red-blue chromaticity histogram can be formed by first normalizing color pixel values by dividing the RGB values by R+G+B, then quantizing the normalized R and B coordinates into N bins each. In the existing system, standard histograms are used: because of their efficiency and insensitivity to small changes, standard histograms are widely used for content based image retrieval.
The main disadvantage of histograms, however, is that many images of different appearance can have similar histograms, because histograms provide only a coarse characterization of an image. The histogram refinement method further refines the histogram by splitting the pixels in a given bucket into several classes, producing a comparison of 8-bin (bucket) and 16-bin histograms. Histogram refinement provides a set of features proposed for Content Based Image Retrieval (CBIR). In this module, the RGB image is first changed to a grayscale image, also known as the intensity image, which is a single 2-D matrix containing values from 0 to 255. After the conversion from RGB to grayscale, we perform quantization to reduce the number of levels in the image: the 256 levels are reduced to 16 levels using uniform quantization. The segmentation is done using color histograms.

Clustering is considered the most important unsupervised learning problem, because no information about the "right answer" is provided for any of the objects. Cluster analysis allows many choices about the nature of the algorithm for combining groups. It finds a reasonable structure in the data set based on the classification of a set of observations; neither the number of clusters nor the rules of assignment into clusters are known, that is, no a priori information about the data is required. There are two basic kinds of input for clustering: if the input contains labels it is called supervised data, and if the input does not contain any labels it is termed unsupervised data. Clustering binds together data with similar characteristics: the similarity of objects is used to divide the data into groups, and to find the similarity of the objects a distance function is used as the main criterion on the data set. Clustering algorithms can perform the functions of classifier methods without the use of training data. To compensate for the lack of training data, clustering methods alternate between segmenting the image and characterizing the properties of each class; in a sense, clustering methods train themselves, using the available data. The K-means clustering algorithm clusters data by iteratively computing a mean intensity for each class and segmenting the image by classifying each pixel into the class with the closest mean. Although clustering algorithms do not require training data, they do require an initial segmentation (or, equivalently, initial parameters). A related approach computes posterior probabilities and maximum likelihood estimates of the means, covariances, and mixing coefficients of a mixture model. Like the classifier methods, clustering algorithms do not directly incorporate spatial modeling and can therefore be affected by noise and intensity inhomogeneities. This can be illustrated by the following example:

Figure: brain region affected by the tumor region. The number of classes was assumed to be three, representing (from dark gray to white) cerebrospinal fluid, gray matter, and white matter.

The fuzzy c-means algorithm generalizes the K-means algorithm, allowing for soft segmentations based on fuzzy set theory.
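As a side note on the quantization step mentioned above: reducing 256 grey levels to 16 by uniform quantization amounts to an integer division by 16, i.e. keeping the top four bits of each 8-bit pixel, which is a one-line operation in hardware. A minimal, hypothetical VHDL sketch (the entity and signal names are illustrative assumptions, not the project's actual code):

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity quantize16 is
  port (
    pixel : in  unsigned(7 downto 0);   -- grey level, 0 to 255
    quant : out unsigned(3 downto 0)    -- quantized level, 0 to 15
  );
end entity quantize16;

architecture rtl of quantize16 is
begin
  -- Uniform quantization: integer-divide by 16 by keeping the 4 most
  -- significant bits of the 8-bit pixel value.
  quant <= pixel(7 downto 4);
end architecture rtl;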
The EM algorithm applies the same clustering principles with the underlying assumption that the data follow a Gaussian mixture model. The main aim of K-means clustering is to form 'k' clusters from the 'm' data elements, where within each of the 'k' clusters the members should be similar to one another. After finding the 'k' clusters, a cross check takes place: the centroid and the distances are computed. The centroid calculation finds the distance between the center of the data (the center of the membership values) and the respective value, and the distance calculation finds the distance between the neighboring pixel values with respect to the original membership value. Distance is the term used for finding the correlation, that is, the similarity, between the pixel values.

CLUSTER ANALYSIS

Cluster analysis or clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. A cluster is a collection of data objects that are similar to one another within the same cluster and are dissimilar to the objects in other clusters. Cluster analysis has been widely used in numerous applications, including pattern recognition, data analysis, image processing, and market research. By clustering, one can identify dense and sparse regions and, therefore, discover overall distribution patterns and interesting correlations among data attributes. As a branch of statistics, cluster analysis has been studied extensively for many years, focusing mainly on distance-based cluster analysis. Cluster analysis tools based on k-means, k-medoids, and several other methods have also been built into many statistical analysis software packages or systems, such as S-Plus, SPSS, and SAS. In machine learning, clustering is an example of unsupervised learning. Unlike classification, clustering and unsupervised learning do not rely on predefined classes and class-labeled training examples. For this reason, clustering is a form of learning by observation, rather than learning by examples. In conceptual clustering, a group of objects forms a class only if it is describable by a concept. This differs from conventional clustering, which measures similarity based on geometric distance. Conceptual clustering consists of two components: (1) it discovers the appropriate classes, and (2) it forms descriptions for each class, as in classification. The guideline of striving for high intraclass similarity and low interclass similarity still applies. An important step in most clustering is to select a distance measure, which will determine how the similarity of two elements is calculated. This will influence the shape of the clusters, as some elements may be close to one another according to one distance and farther away according to another.
For example, in a 2-dimensional space, the distance between the point (x = 1, y = 0) and the origin (x = 0, y = 0) is always 1 according to the usual norms, but the distance between the point (x = 1, y = 1) and the origin can be 2, √2 or 1 if you take respectively the 1-norm, 2-norm or infinity-norm distance.

Common distance functions:

• The Euclidean distance (also called distance as the crow flies or 2-norm distance). A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.
• The Manhattan distance.
• The maximum norm.
• The Mahalanobis distance, which corrects data for different scales and correlations in the variables.
• The angle between two vectors, which can be used as a distance measure when clustering high dimensional data.
• The Hamming distance, which measures the minimum number of substitutions required to change one member into another.

Another important distinction is whether the clustering uses symmetric or asymmetric distances. Many of the distance functions listed above have the property that distances are symmetric.

Future Enhancement

• The classification of the feature set can be extended to heterogeneous features (shape, texture) so that we can get a more accurate result.
• It can also be enhanced by merging heterogeneous features with a neural network.
• The schemes proposed in this work can be further improved by introducing fuzzy logic concepts into the clustering process.

CENTROID-BASED TECHNIQUE: THE K-MEANS METHOD

The k-means algorithm takes the input parameter, k, and partitions a set of n objects into k clusters so that the resulting intracluster similarity is high but the intercluster similarity is low. Cluster similarity is measured in regard to the mean value of the objects in a cluster, which can be viewed as the cluster's center of gravity.

How does the k-means algorithm work? The k-means algorithm proceeds as follows. First, it randomly selects k of the objects, each of which initially represents a cluster mean or center. Each of the remaining objects is assigned to the cluster to which it is the most similar, based on the distance between the object and the cluster mean. It then computes the new mean for each cluster. This process iterates until the criterion function converges. Typically, the squared-error criterion is used, defined as

E = Σ_{i=1..k} Σ_{p ∈ Ci} |p - mi|²

where E is the sum of squared error for all objects in the database, p is the point in space representing a given object, and mi is the mean of cluster Ci (both p and mi are multidimensional). This criterion tries to make the resulting k clusters as compact and as separate as possible. The algorithm attempts to determine k partitions that minimize the squared-error function. It works well when the clusters are compact clouds that are rather well separated from one another. The method is relatively scalable and efficient in processing large data sets because the computational complexity of the algorithm is O(nkt), where n is the total number of objects, k is the number of clusters, and t is the number of iterations. Normally, k ≪ n and t ≪ n. The method often terminates at a local optimum. The k-means method, however, can be applied only when the mean of a cluster is defined. This may not be the case in some applications, such as when data with categorical attributes are involved. The necessity for users to specify k, the number of clusters, in advance can be seen as a disadvantage.
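As an illustrative worked instance of this criterion (with invented numbers, not project data): for the one-dimensional points {1, 2, 9, 10} and k = 2, the clustering C1 = {1, 2}, C2 = {9, 10} has means m1 = 1.5 and m2 = 9.5, giving

E = (1 - 1.5)² + (2 - 1.5)² + (9 - 9.5)² + (10 - 9.5)² = 4 × 0.25 = 1,

whereas the worse clustering C1 = {1}, C2 = {2, 9, 10} (mean of C2 is 7) gives E = 0 + (2 - 7)² + (9 - 7)² + (10 - 7)² = 38. Minimizing E therefore favors the compact, well-separated grouping.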
The k-means method is not suitable for discovering clusters with nonconvex shapes or clusters of very different size. Moreover, it is sensitive to noise and outlier data points, since a small number of such data can substantially influence the mean value.

Suppose that there is a set of objects located in space, and let k = 2; that is, the user would like to cluster the objects into two clusters. According to the algorithm, we arbitrarily choose two objects as the two initial cluster centers, where cluster centers are marked by a "+". Each object is distributed to a cluster based on the cluster center to which it is the nearest. Such a distribution forms silhouettes encircled by dotted curves. This kind of grouping updates the cluster centers: the mean value of each cluster is recalculated based on the objects in the cluster. Relative to these new centers, objects are redistributed to the cluster domains based on which cluster center is the nearest. Such redistribution forms new silhouettes encircled by dashed curves. Eventually, no redistribution of the objects in any cluster occurs, and so the process terminates. The resulting clusters are returned by the clustering process.

K-MEANS CLUSTERING ALGORITHM

Algorithm: k-means. The k-means algorithm for partitioning based on the mean value of the objects in the cluster.
Input: The number of clusters k and a database containing n objects.
Output: A set of k clusters that minimizes the squared-error criterion.
Method:
  arbitrarily choose k objects as the initial cluster centers;
  repeat
    (re)assign each object to the cluster to which the object is the most similar, based on the mean value of the objects in the cluster;
    update the cluster means, i.e., calculate the mean value of the objects for each cluster;
  until no change.

The purpose of k-means clustering is to classify the data. We selected k-means clustering because it is suitable for clustering large amounts of data. K-means creates a single level of clusters, unlike the tree structure of hierarchical clustering methods. Each observation in the data is treated as an object having a location in space, and a partition is found in which objects within each cluster are as close to each other as possible, and as far from objects in other clusters as possible. Selection of the distance measure is an important step in clustering: the distance measure determines the similarity of two elements, and it greatly influences the shape of the clusters, as some elements may be close to one another according to one distance and farther away according to another. We selected the quadratic distance measure, which provides the quadratic distance between the various features. We calculated the distance between all the row vectors of our feature set obtained in the previous section, hence finding the similarity between every pair of objects in the data set. The result is a distance matrix. Next, we used the member objects and the centroid to define each cluster. The centroid for each cluster is the point to which the sum of distances from all objects in that cluster is minimized. The distance information generated above is utilized to determine the proximity of objects to each other. The objects are grouped into K clusters using the distance between the centroids of the two groups. Let Op be the number of objects in cluster p and Oq the number of objects in cluster q, and let d_pi be the i-th object in cluster p and d_qj the j-th object in cluster q.
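The (re)assign step in the listing above is the part of k-means that maps most directly onto FPGA hardware. The following is a minimal, hypothetical VHDL sketch of that step for k = 3 centroids in a two-dimensional feature space; the entity name, the 8-bit feature widths, the fixed choice of three centroids, and the externally supplied centroid ports are all illustrative assumptions rather than the project's actual code, and a complete design would also accumulate per-cluster sums to update the centroids after each pass:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- Assignment step of k-means: label a 2-D feature point with the index of
-- the nearest of three centroids. Squared Euclidean distances are compared
-- directly, so no square root is needed in hardware.
entity kmeans_assign is
  port (
    px, py                 : in  signed(7 downto 0);   -- feature point
    c0x, c0y, c1x, c1y,
    c2x, c2y               : in  signed(7 downto 0);   -- the three current centroids
    label_out              : out unsigned(1 downto 0)  -- index of the nearest centroid
  );
end entity kmeans_assign;

architecture rtl of kmeans_assign is
  -- Squared distance between (x, y) and (cx, cy). Two 8-bit coordinates give
  -- differences of 9 bits and a sum of two squares that fits in 17 bits.
  function dist2(x, y, cx, cy : signed(7 downto 0)) return unsigned is
    variable dx, dy : signed(8 downto 0);
    variable s      : signed(17 downto 0);
  begin
    dx := resize(x, 9) - resize(cx, 9);
    dy := resize(y, 9) - resize(cy, 9);
    s  := dx * dx + dy * dy;
    return unsigned(s(16 downto 0));
  end function;
begin
  process (px, py, c0x, c0y, c1x, c1y, c2x, c2y)
    variable d0, d1, d2 : unsigned(16 downto 0);
  begin
    d0 := dist2(px, py, c0x, c0y);
    d1 := dist2(px, py, c1x, c1y);
    d2 := dist2(px, py, c2x, c2y);
    if (d0 <= d1) and (d0 <= d2) then
      label_out <= "00";
    elsif d1 <= d2 then
      label_out <= "01";
    else
      label_out <= "10";
    end if;
  end process;
end architecture rtl;

Since the square root is monotonic, comparing squared distances selects the same nearest centroid while avoiding a hardware square-root unit, which is the usual design choice for distance comparisons on an FPGA.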
The centroid distance between the two clusters p and q is the distance between their centroids:

d(p, q) = ‖ (1/Op) Σ_{i=1..Op} d_pi - (1/Oq) Σ_{j=1..Oq} d_qj ‖

VHDL

Design entities and configurations

The design entity is the primary hardware abstraction in VHDL. It represents a portion of a hardware design that has well-defined inputs and outputs and performs a well-defined function. A design entity may represent an entire system, a subsystem, a board, a chip, a macro-cell, a logic gate, or any level of abstraction in between. A configuration can be used to describe how design entities are put together to form a complete design. A design entity may be described in terms of a hierarchy of blocks, each of which represents a portion of the whole design. The top-level block in such a hierarchy is the design entity itself; such a block is an external block that resides in a library and may be used as a component of other designs. Nested blocks in the hierarchy are internal blocks, defined by block statements.

Entity declarations: An entity declaration defines the interface between a given design entity and the environment in which it is used. It may also specify declarations and statements that are part of the design entity. A given entity declaration may be shared by many design entities, each of which has a different architecture. Thus, an entity declaration can potentially represent a class of design entities, each with the same interface.

entity_declaration ::=
  entity identifier is
    entity_header
    entity_declarative_part
  [ begin
    entity_statement_part ]
  end [ entity ] [ entity_simple_name ] ;

Generics: Generics provide a channel for static information to be communicated to a block from its environment. The following applies both to external blocks defined by design entities and to internal blocks defined by block statements.

generic_list ::= generic_interface_list

The generics of a block are defined by a generic interface list. Each interface element in such a generic interface list declares a formal generic.

Ports: Ports provide channels for dynamic communication between a block and its environment.

port_list ::= port_interface_list

Architecture bodies: An architecture body defines the body of a design entity. It specifies the relationships between the inputs and outputs of a design entity and may be expressed in terms of structure, dataflow, or behavior. Such specifications may be partial or complete.

architecture_body ::=
  architecture identifier of entity_name is
    architecture_declarative_part
  begin
    architecture_statement_part
  end [ architecture ] [ architecture_simple_name ] ;
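To make the BNF above concrete, here is a small, hypothetical design entity and one possible architecture body for it (the name spike_detect, the 8-bit sample width, and the THRESHOLD generic are assumptions for illustration, not taken from the project):

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- Entity declaration: the external interface, with one generic and two ports.
entity spike_detect is
  generic (
    THRESHOLD : integer := 50          -- static information passed in from the environment
  );
  port (
    sample : in  signed(7 downto 0);   -- one neural sample
    spike  : out std_logic             -- asserted while the sample exceeds the threshold
  );
end entity spike_detect;

-- Architecture body: one possible (dataflow-style) body for the entity above.
architecture dataflow of spike_detect is
begin
  spike <= '1' when sample > to_signed(THRESHOLD, 8) else '0';
end architecture dataflow;

The same entity declaration could be paired with a different architecture (for example, a registered behavioral one), which is exactly the entity/architecture separation the text describes.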
Subprograms and Packages

Subprogram declarations: A subprogram declaration declares a procedure or a function, as indicated by the appropriate reserved word.

subprogram_declaration ::= subprogram_specification ;

subprogram_specification ::=
  procedure designator [ ( formal_parameter_list ) ]
  | [ pure | impure ] function designator [ ( formal_parameter_list ) ] return type_mark

The specification of a procedure specifies its designator and its formal parameters (if any). The specification of a function specifies its designator, its formal parameters (if any), the subtype of the returned value (the result subtype), and whether or not the function is pure. A function is impure if its specification contains the reserved word impure; otherwise, it is said to be pure. A procedure designator is always an identifier. A function designator is either an identifier or an operator symbol.

Subprogram bodies: A subprogram body specifies the execution of a subprogram.

subprogram_body ::=
  subprogram_specification is
    subprogram_declarative_part
  begin
    subprogram_statement_part
  end [ subprogram_kind ] [ designator ] ;

Package declarations: A package declaration defines the interface to a package. The scope of a declaration within a package can be extended to other design units.

package_declaration ::=
  package identifier is
    package_declarative_part
  end [ package ] [ package_simple_name ] ;

Package bodies: A package body defines the bodies of subprograms and the values of deferred constants declared in the interface to the package.

package_body ::=
  package body package_simple_name is
    package_body_declarative_part
  end [ package body ] [ package_simple_name ] ;

Data Types

Scalar types: Scalar types can be classified into four kinds:
1. Enumeration
2. Integer
3. Physical
4. Floating point

Enumeration types: An enumeration type definition defines an enumeration type.

enumeration_type_definition ::= ( enumeration_literal { , enumeration_literal } )
enumeration_literal ::= identifier | character_literal

Integer types: An integer type definition defines an integer type whose set of values includes those of the specified range.

integer_type_definition ::= range_constraint

Physical types: Values of a physical type represent measurements of some quantity. Any value of a physical type is an integral multiple of the primary unit of measurement for that type.

physical_type_definition ::=
  range_constraint
  units
    primary_unit_declaration
    { secondary_unit_declaration }
  end units [ physical_type_simple_name ]

Floating point types: Floating point types provide approximations to the real numbers. Floating point types are useful for models in which the precise characterization of a floating point calculation is not important or not determined.

floating_type_definition ::= range_constraint

Composite types: Composite types are used to define collections of values. These include both arrays of values (collections of values of a homogeneous type) and records of values (collections of values of potentially heterogeneous types).

Array types: An array object is a composite object consisting of elements that have the same subtype. The name for an element of an array uses one or more index values belonging to specified discrete types. The value of an array object is a composite value consisting of the values of its elements.

unconstrained_array_definition ::=
  array ( index_subtype_definition { , index_subtype_definition } ) of element_subtype_indication
constrained_array_definition ::=
  array index_constraint of element_subtype_indication

Record types: A record type is a composite type, objects of which consist of named elements. The value of a record object is a composite value consisting of the values of its elements.

record_type_definition ::=
  record
    element_declaration
    { element_declaration }
  end record [ record_type_simple_name ]
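A few hypothetical declarations, as they might appear in a package or architecture declarative part, showing one instance of each of these type classes (all names are illustrative assumptions):

-- Enumeration type: states of a sorting controller.
type sorter_state is (IDLE, DETECT, ALIGN, CLASSIFY);

-- Integer type: defined by a range constraint.
type cluster_id is range 0 to 7;

-- Physical type: every value is an integral multiple of the primary unit.
type voltage is range 0 to 1_000_000
  units
    uv;                 -- primary unit (microvolt)
    mv = 1000 uv;       -- secondary unit
  end units;

-- Constrained array type: a fixed window of samples.
type sample_window is array (0 to 31) of integer range -128 to 127;

-- Record type: a composite of named elements.
type spike_event is record
  timestamp : integer;
  cluster   : cluster_id;
end record;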
Data Objects:

Object declarations: An object declaration declares an object of a specified type. Such an object is called an explicitly declared object.

Constant declarations: A constant declaration declares a constant of the specified type. Such a constant is an explicitly declared constant.

constant_declaration ::= constant identifier_list : subtype_indication [ := expression ] ;

If the assignment symbol ":=" followed by an expression is present in a constant declaration, the expression specifies the value of the constant; the type of the expression must be that of the constant. The value of a constant cannot be modified after the declaration is elaborated.

Signal declarations: A signal declaration declares a signal of the specified type. Such a signal is an explicitly declared signal.

signal_declaration ::= signal identifier_list : subtype_indication [ signal_kind ] [ := expression ] ;
signal_kind ::= register | bus

Variable declarations: A variable declaration declares a variable of the specified type. Such a variable is an explicitly declared variable.

variable_declaration ::= [ shared ] variable identifier_list : subtype_indication [ := expression ] ;

File declarations: A file declaration declares a file of the specified type. Such a file is an explicitly declared file.

file_declaration ::= file identifier_list : subtype_indication [ file_open_information ] ;

Operators:

Logical operators: The logical operators and, or, nand, nor, xor, xnor, and not are defined for the predefined types BIT and BOOLEAN. They are also defined for any one-dimensional array type whose element type is BIT or BOOLEAN. For the binary operators and, or, nand, nor, xor, and xnor, the operands must be of the same base type. Moreover, for these binary operators defined on one-dimensional array types, the operands must be arrays of the same length, the operation is performed on matching elements of the arrays, and the result is an array with the same index range as the left operand.

Relational operators: Relational operators include tests for equality, inequality, and ordering of operands. The operands of each relational operator must be of the same type, and the result type of each relational operator is the predefined type BOOLEAN.

Operator   Operation                Operand type                              Result type
=          equality                 any type                                  BOOLEAN
/=         inequality               any type                                  BOOLEAN
<          less than                any scalar type or discrete array type    BOOLEAN
<=         less than or equal       any scalar type or discrete array type    BOOLEAN
>          greater than             any scalar type or discrete array type    BOOLEAN
>=         greater than or equal    any scalar type or discrete array type    BOOLEAN
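The following fragment, with invented names, shows one explicitly declared object of each main kind alongside the logical and relational operators just described.

entity objects_demo is
end entity objects_demo;

architecture demo of objects_demo is
  constant LIMIT : integer := 10;               -- explicitly declared constant
  signal   a, b  : bit_vector(3 downto 0);      -- explicitly declared signals
begin
  process (a, b)
    variable hits : integer := 0;               -- explicitly declared variable
  begin
    -- "and" on equal-length bit arrays; "=" and "<" yield BOOLEAN results
    if (a and b) = "0000" and hits < LIMIT then
      hits := hits + 1;
    end if;
  end process;
end architecture demo;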
Shift operators: The shift operators sll, srl, sla, sra, rol, and ror are defined for any one-dimensional array type whose element type is either of the predefined types BIT or BOOLEAN. In each case the right operand is of type INTEGER and the result type is the same as that of the left operand.

Operator   Operation
sll        shift left logical
srl        shift right logical
sla        shift left arithmetic
sra        shift right arithmetic
rol        rotate left logical
ror        rotate right logical

Adding operators: The adding operators + and - are predefined for any numeric type and have their conventional mathematical meaning. The concatenation operator & is predefined for any one-dimensional array type.

Operator   Operation       Left operand type    Right operand type    Result type
+          addition        any numeric type     same type             same type
-          subtraction     any numeric type     same type             same type
&          concatenation   any array type       same array type       same array type
&          concatenation   any array type       the element type      same array type
&          concatenation   the element type     any array type        same array type
&          concatenation   any element type     same element type     an array type with that element type

Multiplying operators: The operators * and / are predefined for any integer and any floating point type and have their conventional mathematical meaning; the operators mod and rem are predefined for any integer type. For each of these operators, the operands and the result are of the same type.

Operator   Operation        Left operand type          Right operand type   Result type
*          multiplication   any integer type           same type            same type
*          multiplication   any floating point type    same type            same type
/          division         any integer type           same type            same type
/          division         any floating point type    same type            same type
mod        modulus          any integer type           same type            same type
rem        remainder        any integer type           same type            same type

Miscellaneous operators: The unary operator abs is predefined for any numeric type.

Operator   Operation        Operand type       Result type
abs        absolute value   any numeric type   same numeric type

The exponentiating operator ** is predefined for each integer type and for each floating point type. In either case the right operand, called the exponent, is of the predefined type INTEGER.

Operator   Operation        Left operand type          Right operand type   Result type
**         exponentiation   any integer type           INTEGER              same as left
**         exponentiation   any floating point type    INTEGER              same as left

In VHDL there are mainly three modeling styles: behavioral modeling, dataflow modeling, and structural modeling.

Behavioral Modeling:

Process statement: A process statement defines an independent sequential process representing the behavior of some portion of the design.

process_statement ::=
    [ process_label : ]
    [ postponed ] process [ ( sensitivity_list ) ] [ is ]
        process_declarative_part
    begin
        process_statement_part
    end [ postponed ] process [ process_label ] ;

If a sensitivity list appears following the reserved word process, the process statement is assumed to contain an implicit wait statement at the end of its statement part, where the sensitivity list of the wait statement is that following the reserved word process. Such a process statement must not contain an explicit wait statement. Similarly, if such a process statement is a parent of a procedure, then that procedure may not contain a wait statement.
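A minimal behavioral sketch (a D flip-flop with asynchronous reset; the names are illustrative): because the process has a sensitivity list, it behaves as if it ended in an implicit wait on clk, rst; and may contain no explicit wait statement.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity dff is
  port ( clk, rst, d : in  std_logic;
         q           : out std_logic );
end entity dff;

architecture behav of dff is
begin
  reg : process (clk, rst)            -- sensitivity list
  begin
    if rst = '1' then
      q <= '0';                       -- asynchronous reset dominates
    elsif rising_edge(clk) then
      q <= d;                         -- sample the input on the rising clock edge
    end if;
  end process reg;
end architecture behav;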
Sequential statements: The various forms of sequential statements are described in this section. Sequential statements are used to define algorithms for the execution of a subprogram or process; they execute in the order in which they appear.

Wait statement: The wait statement causes the suspension of a process statement or a procedure.

wait_statement ::= [ label : ] wait [ sensitivity_clause ] [ condition_clause ] [ timeout_clause ] ;
sensitivity_clause ::= on sensitivity_list
sensitivity_list ::= signal_name { , signal_name }
condition_clause ::= until condition
condition ::= boolean_expression
timeout_clause ::= for time_expression

Assertion statement: An assertion statement checks that a specified condition is true and reports an error if it is not.

assertion_statement ::= [ label : ] assertion ;
assertion ::= assert condition [ report expression ] [ severity expression ]

Report statement: A report statement displays a message.

report_statement ::= [ label : ] report expression [ severity expression ] ;

If statement: An if statement selects for execution one or none of the enclosed sequences of statements, depending on the value of one or more corresponding conditions.

if_statement ::=
    [ if_label : ]
    if condition then
        sequence_of_statements
    { elsif condition then
        sequence_of_statements }
    [ else
        sequence_of_statements ]
    end if [ if_label ] ;

If a label appears at the end of an if statement, it must repeat the if label. For the execution of an if statement, the condition specified after if, and any conditions specified after elsif, are evaluated in succession (treating a final else as elsif TRUE then) until one evaluates to TRUE or all conditions are evaluated and yield FALSE. If one condition evaluates to TRUE, then the corresponding sequence of statements is executed; otherwise, none of the sequences of statements is executed.

Case statement: A case statement selects for execution one of a number of alternative sequences of statements; the chosen alternative is defined by the value of an expression.

case_statement ::=
    [ case_label : ]
    case expression is
        case_statement_alternative
        { case_statement_alternative }
    end case [ case_label ] ;

case_statement_alternative ::= when choices => sequence_of_statements

The expression must be of a discrete type, or of a one-dimensional array type whose element base type is a character type. This type must be determinable independently of the context in which the expression occurs, but using the fact that the expression must be of a discrete type or a one-dimensional character array type. Each choice in a case statement alternative must be of the same type as the expression; the list of choices specifies for which values of the expression the alternative is chosen.

Loop statement: A loop statement includes a sequence of statements that is to be executed repeatedly, zero or more times.

loop_statement ::=
    [ loop_label : ]
    [ iteration_scheme ] loop
        sequence_of_statements
    end loop [ loop_label ] ;

iteration_scheme ::= while condition | for loop_parameter_specification
parameter_specification ::= identifier in discrete_range

Next statement: A next statement is used to complete the execution of one of the iterations of an enclosing loop statement (called "loop" in the following text). The completion is conditional if the statement includes a condition.

next_statement ::= [ label : ] next [ loop_label ] [ when condition ] ;

Exit statement: An exit statement is used to complete the execution of an enclosing loop statement (called "loop" in the following text). The completion is conditional if the statement includes a condition.

exit_statement ::= [ label : ] exit [ loop_label ] [ when condition ] ;

Return statement: A return statement is used to complete the execution of the innermost enclosing function or procedure body.

return_statement ::= [ label : ] return [ expression ] ;

Null statement: A null statement performs no action.

null_statement ::= [ label : ] null ;
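The sketch below (invented names, bit types to keep it self-contained) exercises several of these sequential statements in one process: an explicit wait, a case statement, a for loop, an if statement, and an assertion.

entity seq_demo is
  port ( sel  : in  bit_vector(1 downto 0);
         data : in  bit_vector(3 downto 0);
         y    : out bit );
end entity seq_demo;

architecture behav of seq_demo is
begin
  process
    variable ones : natural;
  begin
    wait on sel, data;                 -- wait statement with a sensitivity clause
    case sel is                        -- case statement on a one-dimensional character array type
      when "00"   => y <= data(0);
      when "01"   => y <= data(1);
      when others => y <= '0';
    end case;
    ones := 0;
    for i in data'range loop           -- loop with a for iteration scheme
      if data(i) = '1' then            -- if statement
        ones := ones + 1;
      end if;
    end loop;
    assert ones <= data'length         -- assertion statement (here trivially true)
      report "bit count out of range" severity error;
  end process;
end architecture behav;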
Data Flow Modeling:

The various forms of concurrent statements are described in this section. Concurrent statements are used to define interconnected blocks and processes that jointly describe the overall behavior or structure of a design. Concurrent statements execute asynchronously with respect to each other.

Block statement: A block statement defines an internal block representing a portion of a design. Blocks may be hierarchically nested to support design decomposition.

block_statement ::=
    block_label :
    block [ ( guard_expression ) ] [ is ]
        block_header
        block_declarative_part
    begin
        block_statement_part
    end block [ block_label ] ;

If a guard expression appears after the reserved word block, then a signal with the simple name GUARD of predefined type BOOLEAN is implicitly declared at the beginning of the declarative part of the block, and the guard expression defines the value of that signal at any given time (see 12.6.4). The type of the guard expression must be type BOOLEAN. Signal GUARD may be used to control the operation of certain statements within the block (see 9.5).

Concurrent procedure call statements: A concurrent procedure call statement represents a process containing the corresponding sequential procedure call statement.

concurrent_procedure_call_statement ::= [ label : ] [ postponed ] procedure_call ;

For any concurrent procedure call statement, there is an equivalent process statement. The equivalent process statement is a postponed process if and only if the concurrent procedure call statement includes the reserved word postponed.

Concurrent assertion statements: A concurrent assertion statement represents a passive process statement containing the specified assertion statement.

concurrent_assertion_statement ::= [ label : ] [ postponed ] assertion ;

Concurrent signal assignment statements: A concurrent signal assignment statement represents an equivalent process statement that assigns values to signals.

concurrent_signal_assignment_statement ::=
    [ label : ] [ postponed ] conditional_signal_assignment
  | [ label : ] [ postponed ] selected_signal_assignment

Conditional signal assignments: The conditional signal assignment represents a process statement in which the signal transform is an if statement.

target <= options
    waveform1 when condition1 else
    waveform2 when condition2 else
    waveform3 when condition3 else
    ...
    waveformN-1 when conditionN-1 else
    waveformN when conditionN ;

Selected signal assignments: The selected signal assignment represents a process statement in which the signal transform is a case statement.

with expression select
    target <= options
        waveform1 when choice_list1,
        waveform2 when choice_list2,
        waveform3 when choice_list3,
        ...
        waveformN-1 when choice_listN-1,
        waveformN when choice_listN ;
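As a dataflow sketch (names invented), the same 4-to-1 selection is written twice below: once as a conditional signal assignment, whose equivalent process wraps an if statement, and once as a selected signal assignment, whose equivalent process wraps a case statement.

entity mux4 is
  port ( sel        : in  bit_vector(1 downto 0);
         a, b, c, d : in  bit;
         y, z       : out bit );
end entity mux4;

architecture dataflow of mux4 is
begin
  -- conditional signal assignment
  y <= a when sel = "00" else
       b when sel = "01" else
       c when sel = "10" else
       d;

  -- selected signal assignment
  with sel select
    z <= a when "00",
         b when "01",
         c when "10",
         d when others;
end architecture dataflow;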
Structural Modeling:

Component declarations: A component declaration declares a virtual design entity interface that may be used in a component instantiation statement. A component configuration or a configuration specification can be used to associate a component instance with a design entity that resides in a library.

component_declaration ::=
    component identifier [ is ]
        [ local_generic_clause ]
        [ local_port_clause ]
    end component [ component_simple_name ] ;

Each interface object in the local generic clause declares a local generic; each interface object in the local port clause declares a local port. If a simple name appears at the end of a component declaration, it must repeat the identifier of the component declaration.

Component instantiation statements: A component instantiation statement defines a subcomponent of the design entity in which it appears, associates signals or values with the ports of that subcomponent, and associates values with generics of that subcomponent. This subcomponent is one instance of a class of components defined by a corresponding component declaration, design entity, or configuration declaration.

component_instantiation_statement ::=
    instantiation_label : instantiated_unit
        [ generic_map_aspect ]
        [ port_map_aspect ] ;

instantiated_unit ::=
    [ component ] component_name
  | entity entity_name [ ( architecture_identifier ) ]
  | configuration configuration_name
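A structural sketch under the same conventions (all names invented): an inverter entity is declared as a component and instantiated twice, with an internal signal wiring the two instances into a two-stage buffer.

entity inv is
  port ( i : in bit; o : out bit );
end entity inv;

architecture rtl of inv is
begin
  o <= not i;
end architecture rtl;

entity buf2 is
  port ( x : in bit; y : out bit );
end entity buf2;

architecture structural of buf2 is
  component inv                                -- component declaration
    port ( i : in bit; o : out bit );
  end component;
  signal mid : bit;                            -- net joining the two instances
begin
  u0 : inv port map ( i => x,   o => mid );    -- component instantiation statements
  u1 : inv port map ( i => mid, o => y );
end architecture structural;

Here default binding associates the component inv with the design entity of the same name; a configuration declaration could instead bind it to a different entity or architecture.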
FPGA (Field Programmable Gate Array)

An FPGA is a kind of integrated circuit that is configured after manufacture to suit the user's design, and a hardware description language (HDL) such as VHDL is used to specify its function. The functionality of an FPGA is similar to that of an application-specific integrated circuit (ASIC). An FPGA consists of an array of logic blocks providing combinational functions (AND, OR, NAND, NOR, NOT, and so on) as well as memory elements, which may be simple flip-flops, and it allows these blocks to be wired together. It can implement logic functions in much the same way an ASIC does, but it can also be redesigned easily and at low cost compared with the alternatives, which is an advantage in many applications.

Some FPGAs offer analog features, such as a programmable slew rate and drive strength on each output pin. Another relatively common analog feature is differential comparators on input pins designed to be connected to differential signaling channels. Some FPGAs integrate analog-to-digital and digital-to-analog converters on the chip, allowing them to operate as a system on chip; such devices blur the line between a field-programmable gate array and a field-programmable analog array. FPGAs thus cover the range of programmable logic devices, from programmable read-only memories to complex programmable logic devices, all of which provide programmable connections between logic gates.

Figure: expanded view of a typical FPGA.

An FPGA is a collection of configurable logic blocks that can be connected through a vast interconnection matrix and formed into a complex digital circuit. It is mainly used in high-speed digital applications where the design matters more than the cost. Rapidly increasing integration and falling prices are leading to ever wider use of FPGAs in the market.

In the coming digital revolution, FPGAs and CPLDs (complex programmable logic devices) are enabling the dynamic development of digital systems in the same way that microprocessors did for embedded systems. Developers can design their circuits using schematic-based techniques, VHDL, Verilog, or any combination of these, depending on simplicity, efficiency, and intended use. Nowadays FPGAs are becoming a staple of modern designs. Depending on the application, the required components and their code are downloaded onto the FPGA for use.

Any schematic representation intended for an FPGA must first be converted into either VHDL or Verilog, as needed, and then compiled. The connectivity between the modules to be embedded on the FPGA board is either physical or logical, as shown in the following diagrams.

Figure: representation of physical linking and logical linking.

Such a description begins with an entity declaration in VHDL (VHSIC Hardware Description Language) or with the corresponding Verilog module declaration and its parameters.

Figure: connectivity between the modules.

In digital design, buses play an important role in connecting the nets of a design; they help manage the nets and present the design in a more readable form.

Figure: representation of the bus joiners.

In recent technology, FPGAs and their logic blocks are interconnected with embedded microprocessors to form complete programmable chips.

CONCLUSION

Spike sorting is a very challenging mathematical problem that has attracted the attention of scientists from different fields. It is indeed an interesting problem for researchers working on signal processing, especially those dealing with pattern recognition and machine learning techniques. It is also crucial for neurophysiologists, since an optimal spike sorting can dramatically increase the number of identified neurons and may allow the study of very sparsely firing neurons, which are hard to find with basic sorting approaches. Given the extraordinary capabilities of current recording systems, which allow simultaneous recording from dozens or even hundreds of channels, there is an urgent need to develop and optimize methods to deal with the resulting massive amounts of data. The reliable identification of the activity of hundreds of simultaneously recorded neurons will play a major role in future developments in neuroscience.

In this article we gave a brief description of how to tackle the main issues of spike sorting. However, there are still many open problems: the sorting of overlapping spikes, the identification of bursting cells and of nearly silent neurons, the development of robust and completely unsupervised methods, how to deal with non-stationary conditions (for example, due to drifting of the electrodes), how to quantify the accuracy of spike sorting outcomes, how to automatically distinguish single units from multi-units, and so on.
One of the biggest problems in developing optimal spike sorting algorithms is that we usually do not have access to the "ground truth". In other words, we do not have exact information about how many neurons we are recording from and which spike corresponds to which neuron. The challenge is then to come up with realistic simulations and clever experiments (such as the ones described in the previous section) that allow the exact quantification of performance and the comparison of different spike sorting methods.

Friday, December 6, 2019

Confidentiality and Minors

Confidentiality is an essential component of the counseling process. It allows the client to build a trusting relationship with the counselor: "Counselors regard the promise of confidentiality to be essential for the development of client trust" (Glosoff & Pate, 2002). Most individuals who seek counseling services assume that what is discussed in sessions with the counselor will be kept in confidence, with limited exceptions. These exceptions become a complex balancing act for the counselor, especially when the clients are minors.

Confidentiality is a widely held ethical standard, a variously accorded legal right of clients, and a responsibility of counselors (American Counseling Association, 2005; American School Counseling Association, 2010). Both the Ethical Standards for School Counselors and the Code of Ethics and Standards for Counseling (2010) specify that counselors are ethically required to take appropriate action and breach confidentiality in certain circumstances involving minors. Counselors are required to breach confidentiality if there is imminent danger to self or others, if there is suspected child abuse or neglect, or to protect a vulnerable client from danger. There are other limitations to confidentiality with minors as well. Some of these limitations involve parents and their right to know what is happening in counseling sessions between the therapist and their child. This is a problem that school counselors and clinical therapists must face when counseling minors; counselors in both settings confront ethical issues regarding confidentiality each time they encounter a client who is a minor.

School counselors have a variety of roles and responsibilities to students, teachers, parents, and administrators (Iyer, McGregor & Connor, 2010). According to the American School Counseling Association (2004), it is the responsibility of the school counselor to help a child develop effective coping skills, identify personal strengths and assets, recognize and express feelings, and provide a foundation for the child's personal and social growth as he or she progresses from school to adulthood. School counselors must collaborate with all persons involved with the minor in this process, which usually includes the parents and teachers. School counselors are also sometimes asked to be a part of child study teams within the school, which can be very beneficial to the students and those involved in their lives.

School counselors must follow the American School Counseling Association's ethical standards regarding confidentiality. In the beginning sessions between the client and the school counselor, confidentiality should be discussed, along with the conditions under which it may have to be breached. According to Lazovsky (2010), the management of student confidentiality has been described as the primary ethical dilemma of school counselors. Various ethical and legal issues arise for school counselors when dealing with confidentiality. School counselors are ethically required to report when a student presents a clear and imminent danger to themselves or others, and some base their decision to breach confidentiality on how imminent the danger presented by the situation is. "Most counselors would agree parents should be informed of drug experimentation by an 8 year old.
Many, however, would disagree with telling parents that a 16-year-old client reported occasional experimentation with marijuana" (Glosoff & Pate, 2002). This example shows that school counselors should use discretion when deciding to breach confidentiality: the two minor clients are different, and each situation could be handled in a variety of ways. According to Lazovsky (2008), school counselors are advised to consult with supervisors and colleagues before making decisions about breaching confidentiality. They should also know their state policies and the laws in the school's jurisdiction.

Another ethical and legal issue that can arise for school counselors counseling minors is the disclosure of student-provided information to parents. Privileged communication is a part of confidentiality; it allows clients to ask counselors to keep their communications and the records of their counseling sessions confidential. Privilege belongs to the client, and the counselor asserts privilege for the client. According to Glosoff (2002), the already complex issue of privileged communication for school counselors is made even more complex by the question of who holds the privilege when counseling a minor. Parents of minors, rather than the minor clients themselves, are assumed to control privilege. School counselors are sometimes subpoenaed for court appearances when the parents do not agree on whether the counselor's testimony is necessary, or when a parental custody dispute is at the heart of the legal proceeding.

The ACA and ASCA recognize that school counselors have limits to their ability to protect client confidences. School counselors must not only be mindful of their ethical duties but also comply with any laws that apply to them. The Family Educational Rights and Privacy Act (FERPA) establishes that parents control the rights of students under the age of 18 (Iyer, McGregor & Connor, 2010). This includes any of the student's records, such as grades, awards, and date of birth. Decisions about the release of these records are based on exceptions under FERPA and on the parents' consent. However, most records regarding the student are held in safe places to which other school officials do not have, or need, access. Another law that school counselors must keep in mind is HIPAA, which was enacted to protect patients' health information; for school counselors, it is the student's medical records that are being protected. The issue of confidentiality in child study teams has also become an ethical dilemma for many school counselors. The school counselor must decide what to disclose and what information to inquire about based on each member's rights and responsibilities, and deciding what to reveal and what to keep confidential can be a difficult task.

Clinical therapists face many ethical and legal issues with regard to confidentiality as well. They differ from school counselors in their role with minors because, in most cases, the only other stakeholder involved with the therapist is the parent. According to Ellis (2009), a minor's right to confidentiality is an area in which, at times, ethics and the law are in conflict. One of these ethical dilemmas arises in the area of client privilege: in the case of minors, this privilege extends to the parents, who act as representatives of their dependent children.
Clinical therapists struggle to maintain confidentiality for their minor clients, especially when the law is on the side of the parents because of their right to know. Stone & Issacs (2003) suggest that, in order to deal with ethical issues regarding confidentiality and minors, therapists should prepare a written professional services agreement that details the limits and conditions of confidentiality. At that point the parents can be involved in their child's treatment in various ways, one of which is periodic family sessions. In the clinical counseling setting, there are often conflicts between duties of confidentiality and the need to share information with parents or other agencies that provide care for a child or adolescent. There can also be ethical conflicts between duties of confidentiality, grounded in respect for patient autonomy, and both statutory and moral obligations to report child abuse, which are grounded in duties of care and protection (Kaplan, 2005). One issue that troubles some clinical therapists is a statutory obligation to report consensual sexual relationships that adolescents are engaged in with adults, irrespective of whether these are clinically judged to be abusive, because many child protection statutes or guidelines frame them as constituting abuse (Ellis, 2009).

There are some similarities between confidentiality with minors in school and in clinical settings. One similarity is that in both settings counselors must follow the same ethical guidelines for breaching confidentiality; breaching confidentiality is allowed by ethical codes in special or extreme circumstances (Lazovsky, 2008). In both settings counselors must carefully deliberate over the circumstances presented to them by the minor client in the counseling sessions and then decide whether or not to breach confidentiality. This is a difficult ethical dilemma that many counselors face in both clinical and school settings. Another similarity is that counselors in both settings must often consult with other staff members for the benefit of the children they serve. It is important for counselors to educate non-mental-health staff members that they must keep confidential any personal information they learn about children as a result of their professional positions (Rehmley & Herley, 2010); if any such information were disclosed outside the school or clinical setting, it could be grounds for a lawsuit.

There are some differences between confidentiality with minors in school and clinical settings as well. One difference is that counselors in clinical settings encounter fewer ethical issues around confidentiality and minors because the parents have usually given legal consent for the counselor to work with the client. In the school setting, by contrast, Rehmley & Herley (2010) state that the counselor often does not have a legal obligation to obtain parental permission before counseling students, unless a federal or state statute says otherwise. Another difference is that in the clinical setting the counseling process may be limited to the counselor, the minor client, and the parents. Most minor clients who are placed in clinical treatment facilities will be unable to make crucial decisions for themselves.
The privilege of informed consent will be given to the parent, and the parent will operate in the child's best interests (Glosoff & Pate, 2002).

Counselors in both clinical and school settings find the ethical and legal issues of confidentiality difficult because there are constant conflicts between the law and ethics. One issue that causes tension between law and ethics is whether children have the right to enter into a counseling relationship without parental consent. According to Rehmley & Herley (2010), every child has a moral right to privacy in the counseling relationship, and Kaplan (2005) believes that children should have the same rights to confidentiality as adult clients. However, counselors constantly struggle between the ethical obligation of privacy to their minor clients and the legal obligation to the parents of those same clients to keep their child protected and safe.

There are some ways counselors can deal with these ethical and legal dilemmas regarding confidentiality and minors. One recommendation made by Iyer, Baxter-McGregor & Connor (2010) is to develop and maintain a strong informed consent policy. Informed consent is an ongoing process and should begin before counseling does. According to Glosoff & Pate (2002), it is beneficial in both settings to develop a written informed consent policy that can be given to parents and anyone else involved in the client's counseling process; all parties involved will then know about confidentiality and what to expect. Another recommendation suggested by Iyer, Baxter-McGregor & Connor (2010) is to educate all members involved in the minor client's counseling process about the importance of confidentiality, which reduces the likelihood of difficult situations posed by ethical dilemmas developing in the first place. An explanation of confidentiality would be a great addition to an orientation for parents, teachers, or other non-mental-health professionals, so that they know what to expect with regard to confidentiality in counseling sessions with minors.

Another suggestion discussed in the literature is to send out educational newsletters and emails. This takes a proactive stance toward the ethical and legal issue of confidentiality and minors and helps to avert possible ethical dilemmas before they occur (Glosoff & Pate, 2002). Items that could be included in these newsletters or emails are a definition of confidentiality, one's informed consent policy, state regulations or laws regarding confidentiality, and a summary of ASCA's and ACA's ethics statements for counselors. Lastly, the literature suggests that counselors develop a strong network of professionals they can confide in and ask for advice when they encounter an ethical dilemma (Iyer, Baxter-McGregor & Connor, 2010; Glosoff & Pate, 2002). This network may include school psychologists, local psychologists, counseling professionals, and anyone who works within a similar field. According to Iyer, Baxter-McGregor & Connor (2010), a counselor may use a common framework such as Kitchener's five moral principles for ethical decision making.
The five moral principles are autonomy, justice (fairness), beneficence (doing good), non-maleficence (doing no harm), and fidelity (keeping promises). Another ethical decision-making model that can be followed is that of Forester-Miller and Davis: 1) identify the problem; 2) apply one's professional code of ethics; 3) determine the nature and dimensions of the dilemma; 4) generate potential courses of action; 5) consider the potential consequences of all options and choose a course of action; 6) evaluate the selected course of action; and 7) implement the course of action.

Counselors in both clinical and school settings have a tremendous amount of responsibility to uphold when counseling minors. The ethical and legal issues that arise for this group can differ between settings and can even contradict one another. It is the responsibility of counselors to prepare themselves, and all parties involved in the counseling process, with the necessary knowledge regarding confidentiality and minors. In many cases, when the counselor is left to choose the right course of action regarding confidentiality, the outcome will inevitably benefit the client.