Methodology Report.

Redesign of Survey of Science and Engineering Research Facilities: 2003

Appendix A: Cognitive Interviews Report

Introduction

Westat and National Science Foundation (NSF) staff visited 44 academic and biomedical institutions and conducted telephone interviews with 2 additional institutions during the course of the survey redesign effort. The institutions represented the diverse nature of research facilities nationwide in terms of size (based on research and development expenditures), geographic location, breadth of research focus (e.g., biomedical versus multidisciplined), and other characteristics [e.g., historically black colleges and universities (HBCUs)]. Information on the methods used to select and recruit institutions, along with a description of the interview strategy, is featured in the first section of the main report.

The interviews had three major objectives. First, NSF sought to determine whether the draft questionnaire captured the important aspects of the institutions' capacity to conduct research. Second, site visit participants were asked to discuss the sources of data and response strategies they would use to complete the survey questions; this information contributed to an understanding of issues that might affect data quality. Finally, the interviews offered an opportunity to learn about the availability of the data requested on the survey and the level of burden associated with various survey topics.

Using the 1999 Survey of Scientific and Engineering Research Facilities (PDF 138K) as a starting point, along with the draft 2003 questionnaire (PDF 396K) as it evolved throughout the redesign effort, respondents were asked to:

In addition to the topics addressed in the 1999 survey, respondents were asked to consider the ways in which computing and networking capacity affected research capacity at their institutions and to evaluate questions that had been drafted to address this relationship.

Respondents were asked to discuss different portions of the questionnaire at different points in the redesign study. For example, some respondents focused on topics addressed in the 1999 survey, either by considering revised question wording or new question-asking strategies (e.g., individual project sheets). Other respondents were asked to discuss only questions developed to address computing and networking capacity.

The most important characteristics of the group of institutions participating in the redesign effort were each institution's unique organizational structure, methods of documenting and planning research space allocations, and computing setup. Every site visit provided valuable information on how the questionnaire was interpreted and suggested potential improvements to the survey instructions or question wording. The site visits were conducted on an iterative basis, with changes suggested during one set of site visits tested during subsequent visits. After each set of site visits, the merits of the revisions were discussed and decisions were made as to whether the revisions should be discarded in favor of the original wording, tested further, or replaced with another version.

It should be noted that revisions to the survey instructions and questions were also suggested by other sources. For example, the expert panel and the participants in the methodology workshop offered advice on additional topics to be addressed in the survey. When possible, survey text was drafted in response to these suggestions and included in the next round of testing.

This appendix provides the key findings from the site visits. Each section begins with a summary of the instructions, definitions, and questions from the 1999 survey or the first draft of new questions, followed by a discussion of respondents' reactions and a description of the revisions made as a result of testing.

Redesign of 1999 Survey Sections

This section focuses on the core elements from the 1999 survey, including:

Existing S&E Research Space

The 1999 survey asked respondents to report the amount of NASF by type (e.g., instructional and research) and by S&E field. Respondents were also asked to indicate whether any of the NASF was leased space, by field, and to provide that total. To guide respondents, the survey included definitions of such terms as research, research facilities, S&E fields, and NASF. The 1999 survey also included instructions for calculating NASF when it was shared across fields or when it was used for more than one purpose.
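The proration logic behind those instructions can be illustrated with a short sketch. This is a minimal, hypothetical example (the room identifiers, fields, share fractions, and square footages are invented); it simply shows shared NASF being divided among fields in proportion to each field's share of use, as the 1999 instructions intended.

```python
# Hypothetical illustration of prorating shared NASF across S&E fields.
# A room's net assignable square feet (NASF) is split according to each
# field's estimated share of use; shares are assumed to sum to 1.0.

shared_rooms = [
    # (room_id, nasf, {field: share_of_use})
    ("B-101", 1200, {"Biological sciences": 0.6, "Chemistry": 0.4}),
    ("E-210", 800,  {"Engineering": 0.5, "Computer sciences": 0.5}),
]

nasf_by_field = {}
for room_id, nasf, shares in shared_rooms:
    for field, share in shares.items():
        nasf_by_field[field] = nasf_by_field.get(field, 0.0) + nasf * share

for field, total in sorted(nasf_by_field.items()):
    print(f"{field}: {total:,.0f} NASF")
```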

Tracking space. When presented with these questions, site visit participants reported widespread use of electronic databases to store information on institutional facilities. Roughly three-fourths of the first 20 institutions visited maintained space allocation data electronically, and nearly all of these institutions used Higher Education General Information Survey (HEGIS) codes to indicate types of space. Although electronic tracking of space was common, there was wide variation in data categorization. For example, some institutions tracked space by department rather than by field. Some of the smaller institutions did not maintain information that would allow them to easily differentiate between space used for sponsored research and space used for other purposes.

During the redesign effort, NSF and Westat, with help from the expert panel, tried to narrow the focus of this survey topic to issues more closely related to the needs of Facilities survey data users. Consequently, questions on instructional space were eliminated from the survey and an attempt was made to collect more detailed information on the NASF allocated to types of research space. Initial revisions of the questionnaire tried to gather separate NASF data for wet labs, dry labs, laboratory support space, offices (to the extent they are used for research), and other space. Although many respondents indicated that their institutions do not maintain data that differentiate between wet and dry lab research space, most (11 of 14 institutions) indicated that their records would allow differentiation of space by lab, laboratory support, offices, and other space. Consequently, these categories were used in the final version of the questionnaire.

Some institutions reported problems in using their data to assemble and report NASF as requested on the survey. For example:

The extent to which specific changes to the questionnaire can overcome all of the respondents' difficulties is limited, given the wide variety of data maintained and readily available. Therefore, the clarity with which the survey instrument conveys the intent and parameters of the question (e.g., focus on budgeted and funded research) will be critical to the success of the survey. Fortunately, the majority of respondents indicated that they did understand the goal of the question, even though several would find it difficult to provide the data.

Types of space. Respondents were sometimes unsure of how they should treat various types of space. For example, several respondents asked whether library space (departmental and project-specific), hallways (especially within suites devoted to research), conference rooms, and elevators (used solely to transport hazardous materials) should be included in the NASF they report. Often, efforts to solve these problems centered on the addition of examples or instructions. For example, on the list of spaces to include as research space:

Field categories. Respondents pointed out difficulties associated with assigning space to the fields listed in the questionnaire. Although respondents were provided with the cross-reference between NSF field categories and the National Center for Education Statistics (NCES) classification of academic departments, it did not always prevent confusion. For example, some respondents chose not to look at the cross-reference when formulating their answers. Others found the format of the cross-reference difficult to understand.

The multidisciplinary nature of some of the research performed at these institutions was perhaps the cause of the greatest share of confusion for respondents as they tried to use the list of fields. For example, some respondents indicated that the departmental structure of their universities did not differentiate between computer sciences, engineering, and applied mathematics; others did not differentiate between computational chemistry, computational biology, and computational biochemistry.

Revisions to the survey instrument focused on helping respondents identify and define fields and assign their areas of research to the questionnaire typology. One example is the addition of a printed instruction immediately above the list of fields referring the respondent to the cross-reference; this instruction appears the first time the list is presented. Changes were also made to the format of the cross-reference to make it more readable and attractive.

Adequacy of Research Space

NSF has used the Facilities survey to gather information on the adequacy of the amount of research space by field. The 1999 question used a dichotomous response structure to describe adequacy (i.e., adequate or inadequate). When space was described as inadequate, respondents were then asked to report the additional amount of space needed. The survey defined space as adequate if there was a "sufficient amount of space to support all the needs of your current S&E program commitments in the field," and space was defined as inadequate "if it was insufficient to meet current commitments or was nonexistent, but needed." Research program commitments were also defined in the questionnaire and respondents were reminded of the definition of research space. During the redesign effort, a new version of the question was developed using a four-point response scale to describe the need for additional space (i.e., adequacy): great need, moderate need, some need, and little or no need. The four-point scale was tested to see if it could provide a better measure of adequacy.

Findings from interviews using both the original and revised items indicated that neither version reflected the complexities involved in determining adequacy. Several respondents indicated that the need for space is driven more by anticipated growth than by current commitments. For example, one respondent stated that the university invests in new or renovated space only after a new grant has been received; current space would be adequate as long as the institution does not grow. Respondents from another institution said that they were not allowed to hire new staff unless space is available for them; again, space is adequate as long as the institution does not grow. Yet another university said that in most areas, they are not pursuing grant opportunities because of limited space. Still, that institution would answer that they had little or no need for additional space because they are meeting current commitments. Therefore, measuring adequacy in terms of current commitments would not produce data that fully describe existing conditions.

Respondents also criticized the subjective nature of the question. One respondent described the revised question as "loaded. . . it depends on who you ask and why you ask them." He said that managers would respond quite differently than researchers. A respondent from another institution asked, "Who would say that they had little or no need?" Another person predicted that survey respondents would alter their answers depending on what they wanted (e.g., more space) or the image they wanted to project.

Some site visit participants believed that different questioning strategies could improve the objectivity of the data. One recommended defining adequacy in terms of barriers to institutional growth caused by lack of space. Another suggested measuring the extent to which fields will be limited by space constraints over a 3-year period. A third proposed adding text to the response category labels (e.g., great need, moderate need, etc.) that would associate a percentage increase in square footage (above the current square footage) with each category. For example, the need to increase square footage by 30 percent or more would be identified on the questionnaire as a "great need."

Based on these suggestions, a third version of the question addressing space adequacy was developed and tested. This version asked respondents to describe "your institution's needs for additional research space to meet your research program commitments over the next 3 years for each field." Response options included none, less than 2,000 square feet, 2,000-4,999 square feet, 5,000-9,999 square feet, 10,000-19,999 square feet, and 20,000 square feet or more.
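The response structure of this third version amounts to binning an estimated need into fixed square-footage categories. The sketch below is only an illustration of that mapping; the function name and the sample input are hypothetical, while the cut points come directly from the response options listed above.

```python
# Illustrative mapping of an estimated additional-space figure (square feet)
# to the response options tested in the third version of the question.

def space_need_category(additional_sq_ft: float) -> str:
    """Return the questionnaire response option for a given need estimate."""
    if additional_sq_ft <= 0:
        return "None"
    if additional_sq_ft < 2_000:
        return "Less than 2,000 square feet"
    if additional_sq_ft < 5_000:
        return "2,000-4,999 square feet"
    if additional_sq_ft < 10_000:
        return "5,000-9,999 square feet"
    if additional_sq_ft < 20_000:
        return "10,000-19,999 square feet"
    return "20,000 square feet or more"

print(space_need_category(7_500))  # -> "5,000-9,999 square feet"
```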

Even though the new version defined an explicit reference period (i.e., 3 years) and used concrete response options, respondents still found the question subjective and ambiguous. Respondents who thought they could accurately answer the question equated it to later questions on the survey that asked about planned repair/renovation and construction projects. Respondents who did not think they could answer the question asked how speculative they should be in anticipating needs for 3 years and some indicated that the question would elicit responses that reflected "dreams" rather than plans or needs.

In summary, asking about adequacy does not seem feasible. Multiple approaches were tried, and deficiencies were found with each. The version that seemed to come closest to meeting the question objectives was seen as equivalent to other items on the survey (i.e., institutions' plans for the future). In light of these findings, the item on adequacy was dropped from the final version of the survey.

Current Research Space Condition

The 1999 survey asked institutions to describe the condition of research space assigned to each field. Specifically, respondents were asked to report the percentage of NASF within four categories: suitable for most scientifically competitive research, effective for most levels of research, requires major repair or renovation, and requires replacement. To indicate that the percentages in each category should sum to 100 percent within the field, "100 percent" appeared at the end of each row (i.e., each field). If any research space in a field required replacement, respondents were asked to record the NASF that was funded and scheduled for replacement during the following 2 fiscal years.
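A short sketch makes the "sum to 100 percent within the field" requirement concrete. The field names and percentages below are hypothetical; the check simply verifies the constraint stated in the 1999 question layout.

```python
# Hypothetical check that the condition percentages reported for each field
# sum to 100 percent, as required by the 1999 question layout.

condition_pct = {
    # field: (suitable, effective, needs repair/renovation, needs replacement)
    "Physics":   (40, 35, 20, 5),
    "Chemistry": (55, 30, 10, 5),
}

for field, pcts in condition_pct.items():
    total = sum(pcts)
    status = "OK" if total == 100 else f"ERROR (sums to {total})"
    print(f"{field}: {status}")
```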

Respondents were provided with two definitions to use when answering these items:

During the initial site visits, several respondents reported that they would base their answers to this question on the age of the space; thus, they suggested that the question focus explicitly on age. The expert panel also suggested the use of net age based on the date of the most recent major renovation. In response, a new version of the question was developed that asked institutions to report the percentage of their space that fell into each of four age categories: 2 years or less, 3-10 years, 11-20 years, and over 20 years. An instruction was added to the question directing respondents to consider the completion date for any major renovation "resulting in a facility that is nearly equivalent to a new facility for research purposes" when reporting age.
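The "net age" idea can be sketched as follows. This is an assumption-laden illustration: the reference year and the sample dates are invented, and the rule shown (use the completion date of the most recent major renovation in place of the original construction date) reflects the instruction described above, not the final survey wording.

```python
# Sketch of assigning space to the draft question's age categories, using the
# completion date of the most recent major renovation (if any) in place of the
# original construction date. Dates and reference year are hypothetical.

def age_category(constructed, last_major_renovation=None, reference_year=2003):
    effective_year = max(constructed, last_major_renovation or constructed)
    age = reference_year - effective_year
    if age <= 2:
        return "2 years or less"
    if age <= 10:
        return "3-10 years"
    if age <= 20:
        return "11-20 years"
    return "Over 20 years"

print(age_category(1975, 1998))  # renovated in 1998 -> "3-10 years"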

This version posed various problems for respondents. For example:

Given these issues, the use of age as a measure of condition was discontinued, and attempts were made to improve the clarity and reduce the subjectivity of the original item. Once again, subjective categories were presented to respondents, but expanded definitions were provided for each category. A three-point scale was used in the new version: satisfactory (suitable for continued use with normal maintenance), needs remodeling (should renovate space within the next 3 years), and terminate (should replace, demolish, sell, or otherwise terminate use of space). Based on results from subsequent interviews using the revised question, the focus on condition in future years appeared to improve data quality.

Continued testing with the subjective response categories led to additional refinements. The final version comes very close to the original question, using a four-point response scale:

This version also includes an instruction to focus on research space rather than on equipment.

Costs of Recent and Current Capital Projects

One section of the 1999 survey asked about the costs of repair/renovation projects and the costs of new construction projects. Respondents were instructed to consider projects that began during the current or previous fiscal year and to include costs associated with planning, site preparation, repair/renovation, fixed equipment, nonfixed equipment costing $1 million or more, and building infrastructure (e.g., plumbing, lighting, air exchange, and safety systems). In addition, respondents were provided with definitions of new construction, fixed equipment, and nonfixed equipment, and were reminded to prorate costs across fields.

Comments from expert panel members and site visit findings suggested various revisions to the original survey questions. These changes are discussed in the following sections.

Completion costs and NASF associated with repair/renovation projects and new construction projects. The 1999 Facilities survey included questions about repair/renovation and new construction projects with completion costs greater than $5,000. One question asked for the total completion cost of all projects begun during the current and previous fiscal years with prorated costs of more than $5,000 and less than or equal to $100,000. Another question asked respondents to report both the completion costs and the NASF associated with repair/renovation projects costing more than $100,000; this information was requested by field. Finally, for biological and medical sciences, the survey collected completion costs and NASF for projects costing more than $500,000.

The same series of questions that asked about repair/renovation projects in the 1999 survey also asked for information on new construction projects, focusing on projects with estimated completion costs greater than $100,000. A followup question asked whether any new construction project included a single building with a total project cost of at least $25 million.

Discussions with site visit participants identified several aspects of these questions that could prove problematic for survey respondents. These findings contributed to the decision to implement the following revisions.

The thresholds for both repair/renovation projects and construction projects have been increased to $250,000 per field. When the Facilities survey was first implemented, a $100,000 threshold was used to identify repair/renovation and construction projects that might significantly affect research capacity; however, inflation has weakened the impact of this threshold: the majority of site visit participants who expressed an opinion felt that the $100,000 threshold did not reflect significant repair/renovation or construction projects. They added that their institutions did not track capital projects unless they reached a $250,000 threshold. Furthermore, based on U.S. Census Bureau statistics on construction costs, expenditures of $100,000 in 1986 would require approximately $162,000 in 2002 dollars.
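The inflation argument can be restated as simple arithmetic. The report quotes only the end result ($100,000 in 1986 is roughly $162,000 in 2002 dollars), so the sketch below works backwards from those two figures to an implied cost ratio rather than using published index values.

```python
# Worked version of the inflation argument above. The implied index ratio is
# derived from the two figures quoted in the report, not from published data.

threshold_1986 = 100_000
equivalent_2002 = 162_000
implied_ratio = equivalent_2002 / threshold_1986   # ~1.62

print(f"Implied 1986->2002 construction cost ratio: {implied_ratio:.2f}")
print(f"$100,000 threshold restated in 2002 dollars: "
      f"${threshold_1986 * implied_ratio:,.0f}")
```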

An instruction has been added, directing respondents how to apply the $250,000 threshold to prorated projects. During early interviews, respondents were unsure if they should apply the threshold to the overall project cost or to each prorated portion of the costs. The new instruction indicates that the threshold applies to each portion.
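The effect of applying the threshold to each prorated portion can be shown with a hypothetical project (the cost and field shares below are invented): the project cost is prorated first, and only portions that individually reach $250,000 are reportable for that field.

```python
# Hypothetical illustration of the new instruction: the $250,000 threshold is
# applied to each prorated field portion, not to the overall project cost.

THRESHOLD = 250_000

project_cost = 600_000
field_shares = {"Engineering": 0.5, "Chemistry": 0.3, "Physics": 0.2}

for field, share in field_shares.items():
    portion = project_cost * share
    verdict = "report" if portion >= THRESHOLD else "below threshold"
    print(f"{field}: ${portion:,.0f} -> {verdict}")
# Only the Engineering portion ($300,000) meets the per-field threshold here.
```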

Information on the additional NASF resulting from repair/renovation projects will no longer be collected by field. Respondents indicated that institutions typically maintain cost and NASF data by project, not by department or field. (However, most respondents said they could manipulate their data to provide answers to the survey questions on repair/renovation project costs by field.)
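The kind of manipulation respondents described (rolling project-level cost records up into field totals) is sketched below. The project records are hypothetical; the point is only that a per-project data store can still yield per-field cost answers.

```python
# Sketch of rolling project-level repair/renovation cost records up into
# field totals. The records are hypothetical.

from collections import defaultdict

projects = [
    {"project": "Lab wing HVAC",          "field": "Biological sciences", "cost": 400_000},
    {"project": "Fume hood replacement",  "field": "Chemistry",           "cost": 275_000},
    {"project": "Animal facility upgrade","field": "Biological sciences", "cost": 310_000},
]

cost_by_field = defaultdict(float)
for record in projects:
    cost_by_field[record["field"]] += record["cost"]

for field, total in cost_by_field.items():
    print(f"{field}: ${total:,.0f}")
```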

A definition of the term "start date" has been added, instructing respondents to focus on the beginning of physical repair/renovation or new construction. Some respondents were confused by the start date specified in the original question instructions. They were unsure whether the date should be defined by the start of planning and design activities or by the start of physical repair/renovation or new construction.

An instruction has been added requiring respondents to provide estimated costs for projects not completed at the time the survey is conducted. During several site visits, respondents indicated that they could only provide estimates of the costs of ongoing projects. This was true especially in the case of projects that were begun during the reference period but scheduled to continue for several years into the future.

Nonfixed equipment. The 1999 Facilities survey included questions about nonfixed equipment associated with the repair/renovation projects or new construction projects reported on the survey. Specifically, respondents were asked to record the name and cost of each piece of equipment valued in excess of $1 million. These questions were designed to help provide data that had previously been gathered through the NSF-sponsored Instrumentation survey.

Members of the expert panel were asked to comment on whether it was advisable to include questions about nonfixed equipment on the Facilities survey. Although the members recognized the reasons for addressing the topic in the survey (e.g., a dearth of existing data on instrumentation), they also identified several arguments against its inclusion. One of the strongest arguments was that as instrumentation costs have risen, there has been an increasing trend toward shared facilities and shared equipment. This has been true both across institutions and across disciplines within institutions. Panel members felt that this phenomenon may be weakening the link between institutional facilities and major instrumentation.

Other arguments against asking about instrumentation on the Facilities survey included the following:

Based on these arguments, the decision was made to drop the topic from the survey and investigate other means of collecting data on major instrumentation.

Project-level data. The 1999 Facilities survey asked respondents to indicate if any new construction project for S&E facilities included a single building with a total project cost of at least $25 million. Institutions that responded affirmatively were sent the 1999 Large Facilities Follow-up Survey and asked to provide additional information about each building, including:

Discussions with participants during the site visits helped identify two shortcomings of the 1999 data collection strategy. First, it focused on buildings rather than projects; in contrast, respondents indicated that their institutions tend to track new construction costs and NASF on a project-by-project basis. Second, the strategy placed added burden on institutions through the implementation of a second survey. In an effort to overcome these shortcomings, attempts were made to include questions from the Followup survey on the Facilities survey. In addition, revised questions were developed that focused on project-level data rather than on building-level data.

The new data collection strategy relies on the addition of a project sheet within the survey. The initial draft of the project sheet asked respondents to report the same information that had been included in the Followup survey, but on a project-by-project basis. However, a significant difference was the change from a $25 million threshold to a $100,000 threshold.[8]

Limited testing of these questions with respondents from three institutions helped refine the project sheet focus. Respondents felt they would be able to answer the question on completion costs and gross square feet of individual projects both overall and by field. They were less confident in their ability to answer other questions on the project sheet. Specific concerns included the following:

In summary, although data manipulation will be required, these findings imply that respondents can report estimated completion costs and NASF overall, by project, for research, and by field. However, the question on features of the new construction was dropped from the final version of the survey given the uncertainties about data quality and the burden associated with providing data for every project with completion costs of $250,000 or more.

Funding sources. The last set of questions about repair/renovation projects and new construction projects in the 1999 survey focused on funding sources for the projects. Respondents were asked to report the dollar amount provided by several sources for projects costing more than $100,000. The funding sources listed in the questionnaire included the Federal Government, state or local government, private donations, institutional funds, tax-exempt bonds, other debt financing, and other sources. Institutional funds were defined as funding from "the institution's operating funds, endowments, indirect costs recovered from Federal grants and/or contracts, indirect costs recovered from other sources, etc." In addition, respondents were asked to report the amount of indirect costs recovered from Federal grants or contracts, if possible.

Questions on funding sources were discussed with participants during the first two sets of site visits. Respondents did not report any problems in providing data on Federal or state government funding, but some had difficulty distinguishing among donations, institutional funds, and debt funding. To solve this, the categories included in the question were collapsed into three: Federal Government, state government, and institutional funds and other sources. Examples of this last category are provided with the question.

When asked to discuss the question on indirect cost recovery, few members of the expert panel said their institutions could identify the amount of indirect costs recovered from Federal grants and contracts. One member said that both indirect costs and recovered overhead lose their identity as they come into a university and are combined with other items. The panel recommended that the indirect cost question should be dropped. However, these data, particularly the extent to which indirect cost reimbursements are being used to fund new construction projects, are of great interest to the Office of Management and Budget (OMB). Therefore, the question was retained in the new survey.

Planned Repair/Renovation and Construction Projects

The next section of the 1999 survey focused on repair/renovation and construction projects planned to begin during the next 2 fiscal years. A project was considered planned only if it had been funded and work had been scheduled. Respondents were asked to report the estimated costs and NASF for planned projects with an estimated completion cost greater than $100,000 by field. They were instructed to sum the estimated costs and NASF across S&E fields.

Respondents were also asked to report the costs and NASF associated with changes to the central campus infrastructure, defined as "systems that exist between the buildings of a campus (excluding the area within 5 feet of any individual building foundation) and the nonarchitectural elements of campus design (central wiring for telecommunications systems, storage or disposal facilities, electrical wiring between buildings, central heating and air exchange systems, drains and sewers, roadways, walkways, parking systems, etc.)."

Because the time available to examine issues raised during each site visit was limited, changes to these items were based largely on the changes made to the questions on current repair/renovation and construction projects. The revised questions were discussed during six site visits and no significant problems were detected.

Deferred Repair/Renovation and Construction Projects

The 1999 Facilities survey also addressed deferred repair/renovation and construction projects. A project was to be considered deferred if it met four criteria:

Respondents were asked to differentiate between projects that were included in the institutional plan and those that were not. They were instructed to report the costs associated with each deferred project and to sum the costs across fields, and were asked to report costs of deferred projects associated with the central campus infrastructure.

Again, because of time constraints, the topics of deferred repair/renovation and construction projects were not addressed during the site visits. Changes to these items were based largely on the changes made to the questions on current repair/renovation and construction projects. However, discussions of draft questions on the adequacy of research space provided some insights into methods survey respondents might use to answer questions on deferred projects. Several site visit participants indicated that their institutional capital plans would be a useful source of information. The plans typically address current and future institutional needs for the next 3 to 5 years.

Although institutional capital plans might be the best source of information, some respondents cautioned that the plans might not provide a fully accurate picture of deferred needs. Participants in one site visit said that their plans identify institutional priorities rather than actual responses to deferred needs. Another institution warned that their plans were more a reflection of appropriation constraints than of deferred needs.

Animal Research Space

The 1999 survey included several questions on animal research facilities; these questions were contained in a separate section of the questionnaire but mainly covered the same topics addressed in other sections. Respondents were also asked about the existence of specially adaptive animal research facilities at the institution, including facilities for induced mutant mice, barrier facilities, fish research facilities, and facilities for infected research animals rated at biosafety level 2 or 3.

At the beginning of the animal research space section, the questionnaire specified the types of space that should be included and gave examples of areas that should be excluded. For example, respondents were instructed to include all "general animal housing" areas (e.g., cage rooms, stalls, and isolation rooms), maintenance areas for animal research (e.g., feed storage rooms and cage-washing rooms), and animal laboratories (e.g., bench space, animal production colonies, holding rooms, and surgical facilities). Respondents were also instructed to exclude agricultural field buildings sheltering animals that do not directly support research or are not subject to government regulations concerning their care, as well as treatment areas for veterinary patients.

The strategy of asking about animal research space separately from other research space was believed to cause confusion among respondents (i.e., respondents might be unsure whether their reports of research space by field should include or exclude departmental animal research space). In fact, early testing confirmed that some respondents were uncertain how they should report departmental animal research space.

To clarify, questions about animal research (which had been included in a separate section of the 1999 survey) were disaggregated and inserted into the various sections of the revised questionnaire. For example, the question on current NASF asked for NASF by field and for animal research.

This new strategy proved largely successful during the site visits. Respondents seemed to better understand the types of space that should be considered. The only issue that remained problematic was that the systems used to track space at several institutions do not designate departmental animal research space as animal research space. For example, at some institutions, this type of space would be classified as laboratory support space. However, it did not appear that any change to the survey questionnaire could eliminate this problem.

Development of the Computing and Networking Capacity Section

Computing and networking capacity has emerged as a significant factor influencing institutional research capacity. Whereas space (i.e., square footage) has traditionally been used as a measure of an institution's ability to perform research, new computer and networking technologies now seem to contribute significantly as well. Recognizing this relationship, efforts were made to identify the key elements of institutional computing and networking capacity used to support research endeavors. Subsequent to these efforts, survey items related to the elements were developed and tested. Finally, the questions were added as a new section of the Facilities survey.

The first step was to hold discussions with expert panel members, which resulted in an initial set of research topics for consideration: expenditures for high performance computing and networking technology, bandwidth capacity, and access to Internet2. In addition, the panel suggested that NSF conduct discussions with information technology (IT) experts and with institutional representatives familiar with IT issues.

To further investigate the main components of computing and networking capacity that might influence research capacity, site visit participants were asked to react to draft questions related to the three topics that the expert panel members and some of NSF's internal consultants suggested. These participants, typically IT staff from various institutions, also were asked to reflect on how the relationship between research and computing and networking capacity could best be measured. Two general themes emerged from these discussions:

These themes influenced both the final list of topics included in the new survey section and the wording of many of the survey questions.

Through additional site visits and further discussions with technology experts, six broad topics on which institutions could reliably report were identified as key to understanding computing and networking capacity:

Two other topics, access to remote instrumentation and curated databases, were investigated because they were considered to contribute to research endeavors. However, respondents indicated that the data needed to answer questions on these topics were not available at the institutional level and that gathering the information from individuals or departments throughout the institution would significantly increase the burden associated with survey participation.

Efforts to develop survey items related to the various computing and networking capacity topics discussed above are described in the following sections.

Expenditures for Computing and Networking

Among the first questions tested were those asking respondents to report institutional expenditures for "information and network technologies" and for "advanced or high performance networking technologies and applications" during the current fiscal year. Expert panel members agreed that since these items were quantifiable and could be standardized, they would be useful indicators of institutional investment in capacity. However, they expressed some doubt that the data would be reliable.

Initial testing confirmed the panel members' suspicions. Two general problems arose:

In addition, respondents did not fully understand the focus of the second question (which addressed advanced or high performance networking technologies and applications) and they did not distinguish between the two questions. To solve this problem, examples of technologies were added to the second question and additional testing was conducted.

Although respondents better understood the question intent with the addition of the definitions, those who examined the new question still expressed concern about their ability to answer it given the highly decentralized nature of technology expenditures. Because these data are often not available to respondents, these items were deleted from the final version of the questionnaire.

Connection Speed

One goal of the new computing and networking capacity section is to identify characteristics of the institutional network that facilitate or impede research. Researchers' increased ability to exchange information and efficiently access resources (both within and outside their own institutions) shows how computing and networking capacity can positively affect research capacity.

The speed at which computer networks exchange or process information was identified as a key factor in facilitating research activities. Several topics related to network speed were discussed with expert panel members, site visit respondents, and participants in a special workshop devoted to computing and networking issues. At the expert panel members' suggestion, a question on institutional bandwidth was developed and tested during three early site visits. Respondents from two of these institutions reported that different types of users have access to different measures of bandwidth. For example, students might have access to 10 Mbps, whereas most faculty have access to 155 Mbps.

Conversations with IT experts contributed to a better conceptual understanding of networking. The ability of researchers at any institution to exchange information or access resources is constrained by the speed of the network "pipeline," which consists of various components such as the backbone and local area networks (LANs); overall network speed is determined by the speed of the slowest component of the pipeline. Site visit participants confirmed the importance of identifying and measuring speed at the slowest part of the network.
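The "pipeline" point can be stated as a one-line calculation: the effective end-to-end speed is bounded by the slowest component. The component names and speeds below are hypothetical, chosen only to illustrate the bottleneck logic.

```python
# Illustration of the pipeline point above: effective end-to-end speed is
# bounded by the slowest component. Names and speeds are hypothetical.

component_speeds_mbps = {
    "desktop port": 10,
    "building LAN": 100,
    "campus backbone": 1_000,
    "external connection": 155,
}

bottleneck = min(component_speeds_mbps, key=component_speeds_mbps.get)
print(f"Effective ceiling: {component_speeds_mbps[bottleneck]} Mbps "
      f"(limited by the {bottleneck})")
```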

Through discussions with the site visit participants and IT experts, terminology about component speed was defined and incorporated into draft questions. Examples of issues related to the terminology used in these questions included the following:

When presented with draft questions on these topics, some respondents misunderstood the items because they assumed that the questionnaire would ask about LAN connections. Several respondents said that connections between desktop ports and the institutional backbone are rare. Typically, they reported that desktops are connected to a LAN, which is then connected to the institutional backbone.

In response to the issues raised during the site visits and in conversations with other IT experts, the final version of the questionnaire focuses on the speeds of the following elements of the institution's computer network:

Although these questions capture the speed of the separate components of the institution's network, they do not describe the overall speed at which individuals can exchange information and efficiently access resources. To obtain a more thorough understanding of these issues, questions were developed asking about:

As with all the questions in the survey's new section, the intent of these questions is to measure institutional capacity, that is, "What was possible at the institution? What could a user theoretically expect from the network?" In the course of the site visit discussions, most participants came to understand this intent and felt confident they could provide the data. Frequently, however, participants made statements that implied uncertainty about whether the questions were measuring capacity or usage.[10]

An attempt was made to clarify the intent of these questions by adding the word "theoretical." It was hoped that this term would help respondents focus on potential speed regardless of temporary network constraints (e.g., number of users or volume of data being transmitted). However, this revision produced questions that respondents found too hypothetical and lacking in definition. These problems were indicated by instances in which respondents described increased speeds that could result if the institution's network were reconfigured.

A second attempt was made to clarify the question intent. Explanatory phrases (e.g., "With your current network configuration…") were added to the questions, and examples of how to calculate the maximum speed were provided. During subsequent site visits, respondents reacted favorably to these changes and the revisions seemed to provide the necessary clarification.

Presence of an Internet2 Connection

Several versions of questions addressing access to Internet2 were tested.

The first version asked if the institution had an "advanced/high performance research network connection," whether the connection was separate from the commodity network connection, and the type (speed) of the connection. Some respondents were unclear about what types of connections should be considered.

In the second version of the question, Internet2 was specifically included as one of three advanced/high-performance connections.[11] Most respondents were familiar with Internet2.

In the final version, respondents were asked whether their institution had a connection to Internet2 at the end of the current fiscal year. They were provided with a description of Internet2 that includes the nature of the consortium and the utilization of the Abilene network.

Inclusion of IT Activities in Planning

Whether or not an institution addresses ways to enhance IT, either in its regular operations or its planning efforts, was identified by several institutions as a predictor of future capacity to incorporate computing and networking into research endeavors. Respondents from smaller research facilities were particularly interested in providing these data.

The question drafted to measure this indicator asked respondents whether their institutions had engaged in various activities such as performing upgrades of software or discussing IT in the institution's mission statement. Some site visit participants mentioned that the activities in question might be performed at either the institutional or the departmental level. They cautioned that when an activity is performed only at the departmental level, some respondents might answer a question focused on institutional-level activity affirmatively rather than report that the activity is not performed.

In response to these observations, the final version of the question allows respondents to report activities at the institutional level, the departmental level, or both. The list of activities includes faculty training in the use of IT, a strategy for network replacement, personal computer upgrades, and operational and application software upgrades.

Computation Rates

Computing and networking issues workshop participants strongly recommended that the new survey section address computation rates at the institutions. Computation rate refers to the number of operations a computer, or set of computers, can perform per second while working on a single application. Specifically, it was suggested that the survey include a question that measured the number of floating-point operations per second (FLOPS) available to users. This measure would be one indicator of the institutional capacity to perform some types of high-performance computing.
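For context, a theoretical peak FLOPS figure is typically a simple product of hardware characteristics. The sketch below is a back-of-envelope illustration only; all of the hardware figures are hypothetical and are not drawn from the survey or the site visits.

```python
# Back-of-envelope sketch of the computation-rate measure discussed above:
# theoretical peak floating-point operations per second (FLOPS) for a cluster.
# All hardware figures are hypothetical.

nodes = 64
cores_per_node = 2
clock_hz = 2.0e9          # 2.0 GHz clock
flops_per_cycle = 2       # e.g., one fused multiply-add per core per cycle

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e9:.0f} GFLOPS")
```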

A question measuring computing speed was drafted and tested during several site visits. Most respondents clearly understood the question, but a small number were unclear about its meaning. Discussions with those respondents indicated that the placement of the question immediately after questions addressing the use of advanced/high-performance computing methods led them to believe that it asked only about computing speed achieved in conjunction with advanced or high-performance computing. To clarify the question intent, the placement was changed.

Workshop participants also suggested asking questions about system configurations that institutions might use to aid in the manipulation of massive amounts of data in a very short time and to facilitate collaboration across geographically distributed sites. One impediment to developing survey questions related to these topics was that the terminology used to describe configurations is still in flux. For example, early versions of the questions asked about the use of supercomputers, distributed/parallel computing, and grid technology. Respondents offered conflicting definitions of these terms when probed; sometimes even respondents from the same institution were unsure how to classify the methods used at their institutions.

The questions were refined and the final version asks respondents if their institutions currently have the capability to conduct high-performance computing on campus or to use grid technology extending beyond the campus. Four strategies were used to refine the questions:

Together, these refinements clearly define the capabilities respondents should consider when answering the survey questions and help them distinguish between those capabilities.

Extent of Wireless Coverage

Wireless technology is an emerging resource that contributes to networking capacity. Some institutions have established wireless capacity across their entire campuses, reflecting a possible trend in computing and networking resource allocation. Institutions may adopt wireless coverage for a variety of reasons, including as a way to avoid the expense of wiring the campus or to provide network access in locations that cannot be easily wired. Questions were developed to assess the degree to which institutions have committed to wireless technology and to measure their plans to do so in the future.

Question development efforts focused on two aspects of wireless technology: coverage and standards (i.e., frequency and speed). Initially, respondents were asked to report the percentage of the institution that was covered by wireless capability. Often, respondents presented with this wording were unclear about how to formulate their answers:

To help resolve this issue, the final versions of the questions ask about the institution's "building area" covered by wireless capabilities for computer network access. This language excludes open areas of the campus (e.g., quadrangles, parking lots, and athletic fields). It also is meant to encourage respondents to consider the entire area within buildings, including the square footage on every floor of the buildings.
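The "building area" measure can be illustrated with a small calculation: total covered square footage, summed over every floor of every building, divided by total building square footage, with open campus areas excluded. The building figures below are hypothetical.

```python
# Hypothetical calculation of the building-area coverage measure: covered
# square footage (summed over every floor) divided by total building square
# footage, excluding open areas of the campus.

buildings = [
    # (total building sq ft across all floors, sq ft with wireless coverage)
    (120_000, 120_000),
    (80_000, 20_000),
    (200_000, 0),
]

total_area = sum(total for total, _ in buildings)
covered_area = sum(covered for _, covered in buildings)
print(f"Wireless coverage: {100 * covered_area / total_area:.0f}% of building area")
```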

Participants at two institutions suggested that there is a difference between area that has access to wireless capabilities and area that is covered by those capabilities: because access points (installations situated inside or outside of buildings that receive and transmit communications) can be moved, the area with access can change depending on where the access points are positioned. They felt that the term "coverage" would imply that respondents should consider fixed access points. The final versions of the questions therefore ask about coverage at a fixed point in time.

Remote Access Instrumentation

Efforts were made to develop questions that address the use of science or engineering instruments such as those used for the remote sensing of earth or space or electron microscopy. The questions focused on two areas:

Participants in earlier site visits were asked to comment on the question, "Does your institution have the capacity to connect to remote shared instruments?" Although respondents understood the intent and terminology of this question, they did not think that the data it would generate would be meaningful; they felt that all institutions would report having the capacity to access remote instruments as long as they had access to the Internet. Respondents from one institution asked if the question was intended to capture use of, rather than the ability to connect to, the instrumentation. They added that if the intent were to measure use, responding to the question would be very difficult given that they do not track usage.

As an alternative means of addressing remote access instrumentation, questions were drafted to measure the existence and use of instruments maintained by the institution and accessed by remote users. Respondents were asked to report the number of instruments maintained by the institution and to identify the instrument with the highest and second-highest level of usage (measured in hours of use). For each of these instruments, respondents were asked to describe:

Respondents from each of the institutions where these items were tested were unable to answer these questions. They indicated that these data were not available at an institutional level and that polling of individual research departments within the institution would be required to gather the information. These questions were dropped from the final version of the questionnaire.

Curated Databases

Increasing use of curated databases motivated attempts to add this topic to the new section of the Facilities survey. Members of the methodology panel suggested that the 2003 survey should ask about the number of databases and that additional information on the databases could be gathered in later survey cycles.

Questions were developed on curated databases (defined as "databases that are actively maintained and updated by your institution…not simply a copy of a database produced by others or a previously developed database that requires no further maintenance") for S&E research that are made available to others outside the institution. Respondents were asked if their institutions maintained this type of database and, if so, to provide information about their two largest databases. One of the questions asked respondents to report the number of times the database was accessed by users from outside the institution. Respondents were reminded to report the number of times the database was accessed rather than the number of times the website was accessed.

Once again, respondents from all the institutions where these questions were tested reported that information on such databases was not maintained at the institutional level and that departmental polling would be needed to gather the data. These questions were dropped from the final version of the questionnaire.




Footnotes

[7]  The questions asked during the site visits focused on a 3-year time frame rather than the 2-year time frame reflected in the final version of the questionnaire. The 2-year period was chosen for consistency with the rest of the survey and the survey's biennial timing. The main benefit of using the 3-year period appeared to be the shift away from a focus on current condition so that institutions would not be in a position in which they were asked to provide negative appraisals of existing condition.

[8]  The $100,000 threshold was later changed to be consistent with the rest of the questionnaire.

[9]  For development purposes, the questionnaire used in the cognitive interviews focused on the current year at the time of the test with the understanding that the actual questionnaire would update the years.

[10]  Usage differs from capacity in that it accounts for temporary constraints placed on the network (e.g., number of users, types of data being exchanged).

[11] The other types of connection included in this question were "peer-to-peer" and "other advanced/high-performance connections." Frequently, respondents suggested that the meaning of peer-to-peer and advanced/high performance connections was ambiguous.

