Yesterday I uploaded a new test build for analysis, now build 118.
Since the previous test build many internal parts have been rewritten, so
that the source code is shared where possible between Analysis, Entry and
EpiC. Most of this is invisible to the end-user, but it ensures that e.g. chk
files are read the same way in all the systems. A few differences remain
between the products, e.g. the naming of the integer function: int() versus
integer().
Of particular interest to the end-user is the following:
Practical:
a. A new table dialog makes it easier (hopefully) to use the possibilities
in the new table command without using the command prompt. The table dialog
is available from the toolbar or via the command tabdialog.
b. Graphs can now be made by a group variable, e.g.
box age /by=sex will create two boxes, one for each of the two values of sex.
c. CHK files are now saved with the data, including definitions of variable
labels and missing values.
d. When looking at a table or graph in output, right-clicking with the mouse
copies that table or graph to the clipboard. (For tables there is sometimes
a pointer error, but this will be fixed.)
Technical issues:
a. No new functionality will be added before the public v2.0 release, except
for translation, which is about halfway implemented.
b. Bugs should be removed before releasing.
- some known issues will be documented in the Mantis database
c. We should decide what to do about text strings in rec files, which are
found in the qes files but not read into Analysis.
d. Translation for Analysis is about halfway done.
e. The Mantis system is not completely updated, but this should take place
within about a week, to reflect what was fixed and what remains.
f. There are still some minor estimation problems, such as percentiles not
being correct with very small samples (e.g. n < 15).
Regards to all
--
Jens Lauritsen
Coordinator and initiator of EpiData Project
http://www.epidata.dk
You are right Jamie - a response of 1 was given for most important. I
will try using your v1=10-v1 and then the method Jens suggests.
Thank you very much to both you and Jens for your help.
Christine
Christine Roseveare
Analyst
Regional Public Health
570 9194
027 495 9671
>>> <epidata-list(a)lists.umanitoba.ca> 24/08/2007 1:33 p.m. >>>
An important question is whether a response of 1 is most important or
least important.
If 1 is most important, then make the default value 11 and not zero.
I would just do:
describe v1-v25
and scan the medians to find those with the lowest median (if 1 is most
important and the default is 11) or the highest median (if 10 is most
important and the default is 0). Items that are infrequently ranked will
have a median equal to the default. Items most frequently ranked will have
a median that is based mostly on ranks.
I haven't thought about the formal statistical analysis here, but you'll
get a pretty good assessment.
Jens's approach works very well when the most important response is 10.
If the most important is 1, then you can also do this:
v1 = 10 - v1
v2 = 10 - v2
etc.
and use aggregate.
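The arithmetic Jamie describes can be sketched in plain Python (this is not EpiData syntax, and the respondent data below is hypothetical, purely for illustration): reverse each rank with 10 - v so that larger numbers mean more important, then sum each variable across respondents, as the aggregate command would.

```python
# Sketch of Jamie's rank-reversal idea in plain Python (not EpiData syntax).
# Assumes ranks 1..10 where 1 = most important; Jamie's transform v = 10 - v
# turns a rank of 1 into 9 (high score) and a rank of 10 into 0 (low score).
# Hypothetical example data: one dict per respondent.
responses = [
    {"v1": 1, "v2": 10, "v3": 5},
    {"v1": 2, "v2": 9,  "v3": 4},
    {"v1": 1, "v2": 8,  "v3": 6},
]

# Reverse each rank so that bigger numbers mean more important.
reversed_responses = [{k: 10 - v for k, v in r.items()} for r in responses]

# Equivalent of "aggregate /sum=...": sum each variable over all respondents.
totals = {}
for r in reversed_responses:
    for k, v in r.items():
        totals[k] = totals.get(k, 0) + v

# The highest total is the most important item overall.
ranking = sorted(totals, key=totals.get, reverse=True)
print(totals)   # {'v1': 26, 'v2': 3, 'v3': 15}
print(ranking)  # ['v1', 'v3', 'v2']
```

With this reversal, an item ranked 1 ("most important") by everyone gets the highest total, so the overall top-10 list is simply the variables with the largest sums.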
jamie
> Christine wrote:
> I have been asked to set up a database for a questionnaire that has
> already been written. I was going to set it up in epidata but my
> problem is how to deal with a question that asks people to rank
> options. The question gives a list of topics and asks the person
> filling in the questionnaire to rank them from 1-10 in order of
> importance.
>
Jens wrote:
> I suggest the following:
> For each question (say 25 different themes) enter the priority the
> person gave to that issue.
> setup the qes:
> v1 item1 ..... ##
> v2 item2 ..... ##
> etc
> v25 text for item 25 ##
>
> in chk make the defaultvalue 0:
> before file
> DEFAULTVALUE v1-v25 0
> end
>
> Then in analysis, after entering the data (say you had the 25 variables):
> read response
> aggregate /sum="v1,v2,v3,v4,v5"
> aggregate /sum="v6,v7,v8,v9,v10" etc
>
> * the tables will show the sum of the variables, that is the overall
> ranking sum.
>
--
The information contained in this email and
any attachments is confidential and may be
legally privileged. If you have received this
message in error, please notify the sender
immediately and remove all copies of the message,
including any attachments. Any views or
opinions expressed in this email (unless
otherwise stated) may not represent those of
Hutt Valley DHB.
Thank you.
** Disclaimer added by HVDHB **
Christine wrote:
I have been asked to set up a database for a questionnaire that has
already been written. I was going to set it up in epidata but my problem
is how to deal with a question that asks people to rank options. The
question gives a list of topics and asks the person filling in the
questionnaire to rank them from 1-10 in order of importance.
I suggest the following:
For each question (say 25 different themes) enter the priority the
person gave to that issue.
setup the qes:
v1 item1 ..... ##
v2 item2 ..... ##
etc
v25 text for item 25 ##
in chk make the defaultvalue 0:
before file
DEFAULTVALUE v1-v25 0
end
Then in analysis, after entering the data (say you had the 25 variables):
read response
aggregate /sum="v1,v2,v3,v4,v5"
aggregate /sum="v6,v7,v8,v9,v10"
etc
* the tables will show the sum of the variables, that is the overall
ranking sum.
example
read bromar // example data set for epidata
aggregate /sum="dectime,km,age"
which gives the output:
N Ndectime SUMdecti1 Nkm SUMkm Nage SUMage
4027 3556 13913.85 3620 381084.00 3786 153740
telling us: 4027 observations in all; of these, 3556 contributed to
dectime, 3620 to km and 3786 to age. The sums of the three are 13913.85,
381084.00 and 153740.
Obviously here the sum makes no sense, but in your case the sum of the
ranking would reflect all of the variables and the responses.
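The per-variable counts and sums that aggregate reports can be sketched in plain Python (again not EpiData syntax; the records below are hypothetical, with None standing in for a missing value):

```python
# Plain-Python sketch of what "aggregate /sum=..." reports (not EpiData
# syntax): for each variable, the count of non-missing observations (N)
# and their sum (SUM). Hypothetical records; None marks a missing value.
records = [
    {"dectime": 3.5,  "km": 100.0, "age": 40},
    {"dectime": None, "km": 80.0,  "age": 35},
    {"dectime": 4.0,  "km": None,  "age": None},
]

results = {}
for var in ("dectime", "km", "age"):
    values = [r[var] for r in records if r[var] is not None]
    results[var] = (len(values), sum(values))
    print(var, "N =", len(values), "SUM =", sum(values))
```

Each variable's N counts only the records that actually contributed a value, which is why the Ns in the bromar output above differ from the overall 4027 observations.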
regards
Jens Lauritsen
EpiData Association
P.S. I tried to locate the user sending spam to the list today/yesterday
and may have effectively deleted this person from the list (if not, we
will see more of the same spam) - terribly annoying.
Hello everyone
I have been asked to set up a database for a questionnaire that has
already been written.
I was going to set it up in epidata but my problem is how to deal with
a question that asks people to rank options.
The question gives a list of topics and asks the person filling in the
questionnaire to rank them from 1-10 in order of importance. (It is for
a city council and they want to know what young people are most
concerned about, so the choices are crime, employment, environment etc.).
I thought in the qes I could just make a variable for each topic and
enter the number the respondent puts beside it, but I don't know how to
then analyse the answers, so I can find out what the top 10 issues are
overall.
If anyone has done this before and can offer a suggestion, I would be
grateful.
thanks
Christine
Christine Roseveare
Analyst
Regional Public Health
570 9194
027 495 9671
Just add "IMIF" in each select group (here shown with one group):
TYPE "\LTABLE SHOWING ILLNESS IN THE SES CLASS 2"
SELECT SES=2
count /q
* in the next line we use a resultvariable called $count:
* resultvariables are shown with the command "result"
imif $count > 0 then
TABLES SEX ILLNESS
else
type "No observations"
endif
Select
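The logic of this IMIF guard can be sketched in plain Python (not EpiData syntax; the records below are hypothetical, with no observations in SES class 2 to mirror Dr. Singh's situation): only tabulate a subset when it contains at least one observation, otherwise print a message and continue.

```python
# Plain-Python sketch of Jens's IMIF guard (not EpiData syntax): only
# produce a table for a subset if it contains at least one observation.
# Hypothetical records; note there are no observations in SES class 2.
records = [
    {"ses": 1, "sex": "F", "illness": "N"},
    {"ses": 1, "sex": "M", "illness": "Y"},
    {"ses": 3, "sex": "F", "illness": "N"},
]

messages = []
for ses_class in (1, 2, 3):
    subset = [r for r in records if r["ses"] == ses_class]
    if subset:                      # "imif $count > 0 then" in EpiData
        messages.append(f"SES {ses_class}: {len(subset)} observations tabulated")
    else:                           # else type "No observations"
        messages.append(f"SES {ses_class}: No observations")

print("\n".join(messages))
```

The empty class 2 no longer aborts the run; it just reports "No observations" and the loop carries on to class 3, which is what the IMIF/else/endif block achieves in the PGM.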
Regards
Jens Lauritsen
EpiData Association
Dear All,
I am bringing one more, and last, problem to your kind attention. Hopefully this will be looked into before the release of v2 of EpiData Analysis. I have a sample database of the distribution of illness according to socio-economic status in the population. Socio-economic status is divided into classes 1, 2 and 3. The sample SES.pgm is listed below.
*SES.PGM
CLS
CLOSE
SET ECHO=OFF
READ "D:\EPIDATA\SES\SES.REC"
TYPE "\LTABLE SHOWING ILLNESS IN THE SES CLASS 1"
SELECT SES=1
TABLES SEX ILLNESS
SELECT
TYPE "\LTABLE SHOWING ILLNESS IN THE SES CLASS 2"
SELECT SES=2
TABLES SEX ILLNESS
SELECT
TYPE "\LTABLE SHOWING ILLNESS IN THE SES CLASS 3"
SELECT SES=3
TABLES SEX ILLNESS
SELECT
*END OF PGM
When I run the PGM, the following output is generated.
TABLES SHOWING ILLNESS IN THE SES CLASS 1
          Sex
Illness    F    M  Total
N          2    1      3
Y          0    3      3
Total      2    4      6
TABLES SHOWING ILLNESS IN THE SES CLASS 2
No data!
Operation aborted
Since there is no data in socio-economic class 2, the PGM aborts the operation and the rest of the PGM is not processed. I am attaching a sample SES.ZIP containing the necessary qes, chk and rec files. The latest test v2 (build 112b) of EpiData Analysis was used.
If this is looked into, large PGMs can be run without any break. Is there any solution to this problem?
Dr. Shavinder Singh
epidata-list-request(a)lists.umanitoba.ca wrote:
> Often, one uses questionnaires where the items and value labels might
> be identical for dozens or more questions. It seems that EpiData creates
> a block in the .CHK file that applies to one variable only. Is there a
> way to apply the same checks and labels to several variables at the same
> time?
>
Technique 1: Common label definitions in EpiData.LBL.
When you start EpiData Entry there is a default label collection. If you
often use the same labels, then edit the file EpiData.lbl, which you will
find in the same folder where the EpiData Entry system (epidata.exe) is
installed.
Then for a given new file, create the rec file and in the Entry system
use "Add/revise" to add checks to the file.
a. Go to the first of the variables which should have a given label.
b. Choose from the list of your predefined labels contained in
"Epidata.lbl" and read it into the definition box.
c. Now switch back to the field in question (F6 switches between the
definition box and the field).
d. Press Ctrl+C.
e. Move the cursor to each of the other fields and press Ctrl+V, and the
definition is copied. It takes very little time.
If you are asking about missing values etc. instead, look in the help file:
MISSINGVALUE
MISSINGVALUE has three syntaxes:
MISSINGVALUE x [y [z]], which can be used in field blocks;
MISSINGVALUE ALL x [y [z]], which can be used in BEFORE FILE blocks; and
MISSINGVALUE field1-field5, field6 …. x [y [z]], which can be used in
BEFORE FILE blocks.
You see that in this case you can use syntax similar to Stata's x-y. But
for labels, the trick above is the only possibility.
Technique 2: Automatic revision of a larger file for subsets.
The other technique is to create a common (large) file with all relevant
questions, properly labelled. Then for a given substudy:
1. create copies of the common large rec file and the associated chk file
2. create a copy of the qes file
3. edit the qes file and delete the sections not currently needed
4. choose the rec file for entering data. Enter will "see" that the qes
file has a more recent date and time and will suggest adapting the
structure to the qes file. Say yes to this, and you will see that the
variables no longer used will be removed, including the checks for those.
In terms of the analysis programme the issue is somewhat different, but
I read the question as being relevant for entry.
regards
Jens Lauritsen
EpiData Association