
AFFIDAVIT OF MICHAEL J. FRIDUSS
ON BEHALF OF THE
U.S. DEPARTMENT OF JUSTICE

I.
PROFESSIONAL BACKGROUND

1. My name is Michael J. Friduss. My business address is 1555 Museum Drive, Highland Park, IL 60035. I am an independent consultant working with C.A. Hempfling & Associates, Inc., under contract with the Antitrust Division of the United States Department of Justice.

2. I received a Bachelor of Science degree in Industrial Engineering from the Illinois Institute of Technology in 1964 and a Master's degree in Management from Northwestern University in 1971.

3. I began my telecommunications career in 1964 as a Management Assistant for Illinois Bell Telephone Company ("Illinois Bell"). In this capacity, I filled a variety of non-management and management positions designed to familiarize me with all departments of the company.

4. From 1966 to 1969, I was a Manager in Illinois Bell's Plant Department. In this capacity, I supervised installation or repair operations in three different territories on the South Side of Chicago.

5. In 1969, I was promoted to District Engineering Manager, responsible for the engineering and design of outside plant, also on Chicago's South Side. In 1970, I was appointed District Plant Manager, responsible for installation and repair activities in Chicago's Hyde Park area. During my tenure in Hyde Park, I also headed an Operation Review team that assessed the quality and cost performance of each district in Chicago Operations.

6. I was promoted to Division Manager—Corporate Planning at AT&T in New York in 1973 and served through 1975. In this capacity, I headed a small group responsible for studying the telecommunications interexchange industry as it existed at that time and for recommending what AT&T's future strategy should be in that segment of the industry.

7. In 1975, I returned to Illinois Bell as Division Plant Manager, responsible for installations and repair in the South suburban area. In 1978, I was named Division Manager—Corporate Planning for the company, responsible for Illinois Bell's planning and operations budgeting, including operations planning for the implementation of the FCC's Computer Inquiry II and divestiture.

8. In 1983, I was promoted to General Manager—Distribution Services, responsible for Illinois Bell's outside operations, construction, and engineering. In this capacity, I supervised 7,000 employees and a budget of $500 million.

9. In 1986, I was promoted to Vice President—Personnel and Support Services for Michigan Bell and in 1989 was named Vice President—Customer Sales and Service for the same company. In the latter role, I was chief operating officer of the company and a member of the Board of Directors, with responsibility for operations and sales, including 11,000 employees and expenditures in excess of $1 billion.

10. In 1992, I returned to Ameritech Services as Vice President—Customer Service and Information Technology, responsible for the strategic and tactical direction of Ameritech's customer service and operations, as well as planning, building, and maintaining high quality and efficient computer systems (chief information officer). I retired from this position in 1993.

11. In late 1993, I formed MJ Friduss & Associates, consultants to the telecommunications industry. Our clients are carriers, primarily current and new local service providers, and small to medium-sized companies that provide hardware, software, and operating systems to those service providers. We are currently working with a number of firms in the areas of strategic planning, marketing, operations, customer services, and supplier management.

12. Additionally, I am Editor of the Friduss Report, a newsletter focused on carrier procurement processes.

II.

SCOPE OF ASSIGNMENT

13. I have been asked by the Antitrust Division of the United States Department of Justice for my opinion regarding the appropriateness and comprehensiveness of the performance measures BellSouth proposes to provide to competitors and regulators. In particular, I have been asked whether these performance measures will reasonably depict the performance of wholesale functions BellSouth is obligated to perform pursuant to the competitive checklist of section 271 of the Communications Act of 1934 (as amended by the Telecommunications Act of 1996) and whether such measures will enable competitors and regulators to determine both the adequacy of BellSouth's performance and the parity of such performance when compared to BellSouth's retail operation.

14. The primary source upon which I relied for my analysis is BellSouth's section 271 application for South Carolina. I generally reviewed the application for any discussion of performance measures. Additionally, I have reviewed:

  • The FCC's Quality of Service report, which summarizes quality of service based on data submitted by the Bell Operating Companies (BOCs), GTE, and Sprint.
  • BellSouth's application, including a Statement of Generally Available Terms (SGAT), before the South Carolina Public Service Commission (SCPSC) to provide interLATA telephone service in South Carolina.
  • Testimony before the SCPSC related to BellSouth's application for entry into the interLATA toll market in South Carolina.
  • The Telecommunications Act of 1996 (1996 Act).
  • Interconnection agreements between the BOC and competitive local exchange carriers (CLECs) in South Carolina.
  • Performance measure proposals by other BOCs, as well as proposals by several CLECs.
  • The LCI/Comptel Petition for Expedited Rulemaking to Establish Reporting Requirements and Performance and Technical Standards for Operations Support Systems.
  • My affidavit in connection with SBC Communications' section 271 application for Oklahoma.
  • The FCC's Opinion and Order on Ameritech's Section 271 application for Michigan.

15. I have also attended meetings with BellSouth and several CLECs interconnecting with or negotiating to interconnect with BellSouth.

16. Additionally, I have reviewed performance measures proposed by other BOCs in various proceedings in other states.

17. Finally, in reviewing BellSouth's proposals, I have drawn upon my significant experience with quality performance standards. As a telephone company line manager and officer, my performance was judged, in part, by how well I met customer service objectives. Further, as a staff manager, I had responsibility for the development and implementation of quality performance standards.

III.

PERFORMANCE MEASURES AND THEIR ROLE

18. The 1996 Act obligates incumbent local exchange carriers (ILECs), and thus BOCs, to provide requesting carriers with interconnection, access to unbundled network elements, and resale services. In fulfilling these obligations, BOCs will perform a variety of wholesale functions for competitors, many of which BOCs also perform in providing retail services. Some of these functions, however, will be new.

19. The ability to detect discrimination in the performance of these functions is dependent on the establishment of performance measures that will allow competitors and regulators to measure BOC performance. Thus, the development of appropriate measures is critical to establishing that the local market is a level playing field in the context of the 1996 Act. Further, on an ongoing basis, the measures must be able to assure that the local market remains open and that any BOC backsliding will be detected.

20. Performance measures, then, serve as criteria for indicating performance, including the performance of wholesale functions. Performance measures enable competitors and regulators to compare a BOC's performance of a function with that provided to a BOC's retail customer or make an assessment of such function in the abstract. For example, to measure how well a BOC performs the functions of provisioning resold local service, we can define a performance measure—"average service provisioning interval"—and use it to describe the BOC's performance and to compare it to the BOC's retail performance of the same function. In general, performance measures are used to determine quality, measuring how long an activity takes to complete (cycle time) and how well the activity is performed (reliability).

21. A performance measure may include an objective or target, such as the cycle-time measure "five days to complete an order," where overall the measure is a percentage of orders meeting or not meeting the target. A performance measure can also encompass a raw time interval, such as the average number of days to complete resale orders. In neither case, however, does the outcome of the measure—the percentage or cycle time—itself indicate "good" performance or "bad" performance. Thus, performance measures themselves are not the barometers of performance, but rather the yardsticks with which to measure such performance. Accordingly, my review is limited to the sufficiency of BellSouth's performance measures rather than the sufficiency of its performance.
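
To illustrate the two forms, consider the following minimal sketch (in Python, offered purely for exposition; the sample intervals and the five-day target are hypothetical). It computes the same underlying data both as a percentage against a target and as a raw average interval:

    # Illustrative only: hypothetical completion intervals, in days, for resale orders.
    intervals_in_days = [2, 3, 5, 5, 6, 8, 4, 5]

    TARGET_DAYS = 5  # e.g., "five days to complete an order"

    # Form 1: percentage of orders meeting the target.
    pct_met = 100.0 * sum(1 for d in intervals_in_days if d <= TARGET_DAYS) / len(intervals_in_days)

    # Form 2: raw average completion interval.
    avg_interval = sum(intervals_in_days) / len(intervals_in_days)

    print(f"{pct_met:.1f}% of orders met the {TARGET_DAYS}-day target")
    print(f"average interval: {avg_interval:.2f} days")

Note that neither number is, by itself, "good" or "bad"; each is simply a yardstick against which performance can later be judged.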

22. The most competitively significant, and thus the highest-priority performance measures should be those that describe the end-to-end quality of service from the customer's viewpoint. Studies over the years have identified performance measures that correlate highly with the customer's perceptions of service quality, such as the percentage of repeat reports of trouble, while others have a lower correlation.

23. Finally, while performance measures are generally easy to identify, there is no universally accepted definition of what a measure proposes to reveal or specifically how to gather the necessary data that comprises the measure. For example, cycle-time performance measures are dependent on the specific definition of start and stop times, while reliability measures are dependent on the specific definition of what constitutes a failure. This affidavit does not attempt to specify these definitions. However, it is critical that BellSouth and interconnecting CLECs do so to ensure useful results. I have assumed that all parties will commit to reporting results that reflect the spirit, as well as the paper definition, of a performance measure. For example, in measuring the level of missed appointments, the result should be measured against the customer-requested due date; due date changes should only be considered where explicitly requested by the end user or explicitly agreed to by BellSouth and a CLEC.
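
The missed-appointment example above can be made concrete with a short sketch (Python; the order records are hypothetical). It shows how the same completions yield different results depending on whether the measure is taken against the customer-requested due date or against a later, changed due date:

    from datetime import date

    # Illustrative only: (customer-requested due date, due date after changes, completion date)
    orders = [
        (date(1997, 9, 1), date(1997, 9, 1), date(1997, 9, 1)),  # completed on time
        (date(1997, 9, 2), date(1997, 9, 4), date(1997, 9, 4)),  # due date slipped, then "met"
        (date(1997, 9, 3), date(1997, 9, 3), date(1997, 9, 5)),  # missed outright
    ]

    def pct_missed(due_dates, completions):
        missed = sum(1 for due, done in zip(due_dates, completions) if done > due)
        return 100.0 * missed / len(due_dates)

    requested = [o[0] for o in orders]
    changed = [o[1] for o in orders]
    completed = [o[2] for o in orders]

    print(f"measured against requested dates: {pct_missed(requested, completed):.1f}% missed")
    print(f"measured against changed dates:   {pct_missed(changed, completed):.1f}% missed")

The first figure (66.7%) reflects the spirit of the measure; the second (33.3%) shows how unilateral due date changes can mask misses, which is why such changes should count only when requested by the end user or agreed to by both parties.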

24. As is discussed more fully below, my review of BellSouth's proposed performance measures includes an assessment of (1) the scope of the functions measured; (2) the specific definitions of the measures; (3) the value and applicability of the measures through the appropriate disaggregation of functions, markets, and products; (4) the stability of the measures; (5) the scaleability of the measures; and (6) whether the proposed measures will allow CLECs and regulators to compare BellSouth's wholesale and retail performance of the functions measured.

A. BOC PERFORMANCE MEASURES TO DATE

25. Over the past 120 years, telephone companies have developed extensive measures of customer service. These performance measures have generally served two purposes: (1) to allow for the comparison of performance between managers, territories, organizations, and companies, and (2) to provide regulators with indicators of potential problems. These measures cover all areas of customer-affecting performance, including customer care, provisioning, repair, billing, and network maintenance. Regulatory requirements notwithstanding, these performance measures comprise a key indicator of management success. Objectives are set, data is gathered, reports are published, and results become part of the corporate, organizational, and individual success determination.

26. Using performance measures, most state public utility commissions require achievement of certain levels or standards of performance for customer service. For example, the SCPSC requires results reported for the following:

  • Trouble reports per hundred access lines
  • Customer out of service trouble clearing times
  • Held orders over 30 days
  • Percentage of service orders for installations and reinstallations completed within five working days
  • Percentage commitments fulfilled (missed appointments)
  • Trunk failure rates
  • Loop transmission measures:
    • DC line current
    • Circuit loss
    • Circuit noise
    • Power influence
    • Balance
  • Dialtone delay
  • Toll and operator assistance call answer time
  • Repair service answer time

27. The FCC requires the BOCs, GTE, and Sprint to submit quality-of-service data that is summarized annually in a report entitled "Quality of Service for the Local Operating Companies Aggregated to the Holding Company Level." Without specifying particular levels, the report includes the following performance measures:

  • Percent of installation appointments met
  • Average missed installation in days
  • Average repair interval
  • Initial trouble reports per 1000 access lines
  • Troubles found per 1000 access lines
  • Repeat trouble as a percent of initial trouble reports
  • Complaints per million access lines
  • Switches with downtime
  • Average switch downtime in seconds per switch
  • Unscheduled downtime over 2 minutes per occurrence
  • Scheduled downtime over 2 minutes per occurrence
  • Trunk groups with blocking as a percent of total trunk groups

28. Thus, to date local exchange providers have reported on a significant list of measures of their retail performance. Given the new wholesale role imposed on ILECs by the 1996 Act and the many new functions to be performed in that role, some new performance measures will be required to both accurately describe existing performance and depict performance of new functions.

B. PARITY VERSUS ADEQUACY PERFORMANCE MEASURES

29. Under the wholesale/retail model imposed on ILECs by the 1996 Act, there are two categories of measurements used to depict ILEC performance of a particular function: parity performance measurements and adequacy performance measurements. When a BOC's performance of certain functions for its retail units or "end user" customers is identical or analogous to the performance of those functions for competitors or their customers, parity performance measures apply. Parity performance measures are used to juxtapose performance results, such as comparing trouble report rates of a BOC's customers with those of a competitor's customers. Thus, parity performance measures are used for "apples-to-apples" comparisons and are most often applied in the resale environment, where the functions a BOC performs for a competitor's customers are almost identical to those performed for its own retail customers.

30. In contrast, adequacy performance measures facilitate the establishment of an objective or target pertaining to functions a BOC either (1) performs only for competitors, or (2) performs for competitors in a manner sufficiently different from that performed for the BOC itself such that a comparison is meaningless or unhelpful. Thus, adequacy performance measures apply in "apples-to-oranges" comparisons and facilitate a determination of whether CLECs are afforded a meaningful opportunity to compete. Adequacy measures apply primarily in the UNE environment.

C. MARKET AND PRODUCT DISAGGREGATION OF PARITY PERFORMANCE MEASURES

31. Meaningful determinations of parity performance require "apples-to-apples" comparisons of the functions performed by a BOC. Where, for example, the same function is performed by different personnel, with different facilities, or for different customer classes or products, more refined comparisons are required. Thus, for example, the function of installing POTS service for consumer and business customers may be identical, but because business customers may be more sensitive to installation delays, a meaningful comparison may require juxtaposition of only business customer installation intervals.

32. There are two general categories of such further disaggregation. First, market parity refers to equality between appropriate customer groups. Customer groups may be broken out geographically or by class of service. Geographic market parity means comparing CLEC results to BOC results within the geography in which the CLEC has chosen to offer service. For example, if a CLEC offers resale service only in city A, a meaningful comparison may require the BOC to provide its retail results only for city A.

33. Class of service market parity means comparing CLEC results to BOC results within the classes of service the CLEC has chosen to offer. For example, if a CLEC offers service to small-business end users only, for purposes of comparison a BOC may have to provide its retail results for such small-business users.

34. A second category of disaggregation is product parity. Where the provision of different products to the same or different customer group requires use of different facilities, personnel, and so forth, meaningful parity comparisons may require disaggregation of performance results by the products offered by a CLEC. Product groups may further be broken out both by wholesale category and by specific products offered to end users. Wholesale categories include resale, UNE (possibly further broken out by loop-only, UNE combinations, and so forth), and facilities-based. Performance measures are required for each wholesale category. Specific products offered to end users include POTS, HICAP, Subrate, ISDN, or Centrex. For example, if a CLEC chooses to offer ISDN, a BOC could provide performance measurements that would allow for a comparison with its own ISDN retail product.

D. REPORTING REQUIREMENTS

35. Once appropriate performance measures have been agreed to and the data gathered, the results must be formatted into reports and provided to CLECs and regulators. My review will include proposed report formats, report frequency, the appropriateness of result comparisons, report accuracy and completeness, and the availability of raw data.

36. Report format relates to how performance measure results are presented. Are they presented in tabular or graphical form? Are they readable and understandable? Can a CLEC or regulator determine whether parity has been achieved? Report frequency relates to how often reports will be provided. Report accuracy and completeness relate to the statistical validity of the proposed data. Appropriateness of result comparisons relates to the entities for which the data will be provided: BOC retail? BOC subsidiaries? the CLEC? all CLECs? other?

IV.

OVERVIEW OF BOC WHOLESALE FUNCTIONS

37. It is helpful to divide the functions BOCs will perform for CLECs under the 1996 Act into five primary categories: pre-ordering, ordering, provisioning, maintenance and repair, and billing functions. These categories describe the spectrum of functions through which CLECs acquire new customers, maintain facilities for them, and bill them. Within each category, performance measures identify the cycle time and reliability of each function. Performance parity is achieved if CLEC resale customers enjoy cycle time and reliability of functions equivalent to that experienced by the BOC's customers or its affiliates' customers. Performance adequacy is achieved if, for example, through the provision of network elements, CLECs are afforded a meaningful opportunity to compete.

38. Pre-ordering describes the initial process of a CLEC or BOC customer service representative obtaining information to place an order for new, additional, or changed service. Pre-order cycle-time performance measures generally refer to the reliability and response times of operations support systems (OSSs) that allow the representative to complete the service order with the customer on the line. Pre-order reliability performance measures refer to the accuracy and completeness of the data received. These pre-ordering functions are generally visible to the end user.

39. Ordering describes the process of the service representative transmitting the service order into the BOC's OSSs for facility assignment, database updates, switch updates, and dispatch of a technician, if required. For a CLEC, this includes successfully moving the service order across an agreed-upon interface into the BOC's OSSs. Ordering cycle-time performance measures refer to BOC response times for notices of order confirmation, jeopardy, or rejection. Ordering reliability performance measures refer to the accuracy and completeness of these notices, as well as the percentage of rejected orders. Ordering performance measures also address the percentage of service orders that "flow-through" from a service representative to completion if no technician dispatch is required or to the point of dispatch if dispatch is required. OSS availability and BOC service center answer time performance measures may also be considered to be part of the ordering process. Ordering is generally transparent to end users.

40. Provisioning involves the execution of a request for a set of products and services or unbundled network elements with attendant acknowledgments and status reports. Provisioning performance measures measure how quickly and well customer service orders are completed. Provisioning results are highly visible to end users and are critical to a determination of performance parity. Provisioning cycle-time performance measures refer to measuring the interval, from the end user's perspective, from order placement to order completion. Provisioning reliability performance measures refer to the accuracy of the work done (i.e., did the end users receive what they ordered) and to the quality of the work done (i.e., did everything work).

41. For purposes of this review, I have evaluated categories of repair and maintenance separately. Repair is the process by which end users report a case of trouble and the trouble is subsequently cleared. This process is highly visible to the end user and has a high correlation with the end user's perception of the service provider. Repair cycle-time performance measures depict the interval from end-user report to trouble clearance and notification. Repair reliability performance measures measure the quality of the repair operation.

42. Maintenance refers to how well the network itself is maintained, and associated performance measures generally refer to reliability rather than cycle time. The most visible performance measure is the mean time between troubles, often referred to as the trouble report rate. Other performance measures measure how well the BOC's switching and transmission elements are maintained.

43. Billing performance measures describe the speed, accuracy, and completeness of end-user usage data from the BOC to the CLEC. While the process may be transparent to the end user, the end product is highly visible.

44. There are several miscellaneous functions that must also be measured. These include toll and directory assistance operator services, directory listing, and 911 database updates.

V.

REVIEW OF BELLSOUTH'S PROPOSED PERFORMANCE MEASURES

45. This part of the affidavit addresses the performance measures explicitly cited in BellSouth's application, performance measures included in existing interconnection agreements, performance measures included in BellSouth's SGAT, and performance measures not explicitly or implicitly cited by BellSouth that are important to measuring functions required under the 1996 Act. Section A discusses BellSouth's commitment to providing CLECs with services at parity with its retail operations and performance measures that will show such parity. Section B reviews all such measures under the assumption that they would be reported, as discussed more fully below, to both competitors and regulators on an ongoing basis. In particular, Section B addresses the proposed performance measures for each wholesale process—pre-ordering, ordering, provisioning, repair and maintenance, and billing—described above. Sections C and D describe methods of disaggregating those performance measures to more accurately perform parity and adequacy assessments by market and product. Finally, Section E discusses the need for consistent and accurate reporting and highlights those measurements BellSouth has indicated will be reported to both competitors and regulators for purposes of this application.

46. Most of the resale performance measure examples discussed below are not new. Many are tracked and reported by BOCs for retail operations and are reported to state or federal regulatory bodies. At the same time, UNE performance measures, although similar to resale, measure the performance of wholesale functions that are new to the BOCs.

47. It is important to note that this affidavit is not an attempt to prescribe a model set of performance measures or an attempt to lay out a minimum set of performance measures that would meet the requirements of the 1996 Act. I discuss below historically and widely used, newly appropriate, or exemplary performance measures for each of the wholesale functions BOCs will perform under the 1996 Act, and variation from those discussed may be possible without necessarily impacting the ability to determine parity or adequacy of performance.

A. BELLSOUTH'S COMMITMENT TO PARITY

48. BellSouth's application for provision of in-region, interLATA service in South Carolina commits to equal quality of resale services and interconnection to new entrants and nondiscriminatory provision of unbundled elements (Stacy Performance Aff. ¶ 2). BellSouth further commits to "collect all necessary data to demonstrate this fact" (Stacy Performance Aff. ¶ 86). It expressly proposes to provide the measures discussed below, which are broken out by process.

49. BellSouth states that its existing performance measures are more than adequate to allow for the detection of "non-discrimination" and "meaningful opportunity to compete" standards (Stacy Performance Aff. ¶ 3). These measurements are portrayed as being developed in three different formats: initial measurements, historically used by BellSouth Telecommunications (BST) and applied to BST and CLECs; AT&T measurements, contractually agreed to with AT&T; and permanent measurements, based on the AT&T measurements but with additions. (Stacy Performance Aff. ¶ 16)

50. BellSouth Telecommunications has created a new and separate officer-level organization responsible for all operational aspects of provisioning and maintenance of services provided to CLECs. Two Local Carrier Service Centers (LCSCs), available 24 hours a day 7 days a week, have been established to provide contact points for CLECs ordering resale or UNEs. Further, a Customer Support Manager is assigned to each CLEC as a single point of contact for CLECs whose customers have operational issues not resolved by normal processes. (Stacy Performance Aff. ¶ 4)

51. BellSouth's SGAT filed with SCPSC contains a commitment to parity (SGAT § I. (I), (J)) but proposes no specific performance measures.

52. BellSouth has interconnection agreements with 83 telecommunications carriers in South Carolina. Two are included as exhibits to Stacy's affidavit: the agreements with AT&T and Time Warner. BellSouth reached agreement with AT&T on performance measures as part of their agreement and filed these measures with the SCPSC (Stacy Performance Aff. ¶ 28). The two companies have agreed to extend these measures to all nine BellSouth states. Further, BellSouth and Time Warner have agreed to performance measures in their interconnection agreement, executed on September 5, 1997 (Stacy Performance Aff. Ex. WNS-5). Both these interconnection agreements contain additional performance measures that have not been proposed in BellSouth's permanent measurements.

B. BELLSOUTH'S PROPOSED PERFORMANCE MEASURES

53. Pre-ordering: Pre-ordering performance measures revolve around the ability of a CLEC service representative to complete an order with an end user on the line with at least the speed and accuracy of a BOC service representative taking a similar service order from a retail end user. Since CLEC service representatives will likely interface with BOC OSSs and with BOC service representatives, performance measures are needed to measure the cycle time and reliability of both interactions. These measurements will ensure that BOC service representatives do not have an unfair advantage in creating a superior end-user perception of speed and efficiency. Typical pre-ordering performance measures include the following:

  • Pre-order OSS Availability: Measures both the hours and days the BOC's pre-order OSSs are available to CLECs and non-scheduled downtime. This performance measure is important because it ensures that a CLEC, which may have different service center hours than the BOC, will have access to the systems and databases it requires when they are needed.
  • Pre-order System Response Times: Measures, in seconds, the speed with which the CLEC service representative receives information for the processes described below with a customer on the line. These cycle-time measures assume the CLEC has mechanical access to the BOC databases and should be measured in a manner that allows appropriate comparisons to like cycle times experienced by BOC retail service representatives. They are important because customer perceptions of service are impacted by the speed and efficiency of their service center contact.
    • Address verification
    • Request for telephone number
    • Request for customer service record (CSR)
    • Service and product availability
    • Appointment scheduling

54. BellSouth has not proposed any pre-ordering performance measures in its permanent measurements, in its SGAT, or in interconnection agreements that I have reviewed.

55. Ordering: Ordering performance measures revolve around the CLEC's ability to have end-user service orders, placed with the BOC and delivered through the BOC's OSSs, processed with speed and accuracy at least equal to that enjoyed by the BOC itself. Ordering cycle time is primarily measured by the promptness of communications between the BOC and the CLEC. Ordering reliability is measured by the accuracy of the service order and by the success of order "flow-through." Typical ordering performance measures include the following:

  • Firm Order Commitment (FOC) Cycle Time: Measures the time from CLEC service order submission to BOC response, confirming receipt of a properly formatted and appointed order. Can be presented as a mean interval or as the percentage returned within an agreed-upon interval. This is an important measure because it helps depict whether CLEC service orders are processed in a manner that leads to overall provisioning interval parity.
  • Rejected Order Cycle Time: Measures the time, from CLEC service order submission to BOC response, for rejecting an incomplete service order or one containing errors. Each submission of an order, up to and including the FOC, requires a response cycle-time result.
  • Service Order Cycle Time: The average time it takes to process a CLEC service order, measured from the first time the order reaches the BOC interface to the order being placed in queue for completion. Comparisons can be made to equivalent BOC cycle times to assure the CLEC of processing parity. Service Order Cycle Time captures both reject and commitment intervals.
  • Ordering Quality: The following performance measures, along with Service Order Cycle Time, are important determinants of service order processing parity or adequacy. Each is important in its own right and provides insights into different aspects of order quality; however, the entire set would not be required as a determinant of discrimination. For example, Service Order Accuracy is likely to correlate highly with Percent Rejected Orders and with Order Submissions per Order. (Two of these ratios are illustrated in the sketch following this list.)
    • Service Order Accuracy: Measures the quality of a service order up to the BOC gateway in terms of errors per service order. It tends to reflect more on the CLEC than on the BOC and would be difficult to track.
    • Percent Rejected Orders: An important measure of order quality that reflects on both the BOC and the CLEC. Measured at the BOC gateway, it is the result of dividing rejected orders by total orders submitted, manually or mechanically. It is an adequacy measure because there are no equivalent BOC analogs. BOC orders are "rejected" via automatic edits before the order leaves the service representative position.
    • Order Submissions per Order: Another important determinant of order quality. Measured at the BOC gateway, it is determined by dividing total order submissions by the number of orders receiving a firm order commitment.
    • Percent Flow Through: Measures the percentage of service orders that flow from the BOC gateway to completion queue without manual intervention. Flow-through can be a parity measure in a resale environment and an adequacy measure in a UNE environment. Unless reprogrammed, it is unlikely that BOC OSSs will discriminate between BOC and CLEC service orders. Therefore, although important as a determinant of processing efficiency and one that the BOCs have historically used for this purpose, it is unlikely that Percent Flow Through will prove either parity or discrimination.
  • Ordering OSS Availability: Measures both the BOC ordering OSS hours of operation and the reliability of the systems.
  • Ordering Center Availability: Measures the hours and days of operation of the BOC ordering center.
  • Speed of Answer—Ordering Center: Measured as the average time to reach a BOC service representative. This can be an important measure of adequacy in a manual environment or even in a mechanized environment where CLEC service representatives have a need to speak with their BOC peers.
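
As a concrete illustration of the ordering-quality ratios defined above, the following sketch (Python; the submission records are hypothetical) computes Percent Rejected Orders and Order Submissions per Order as measured at the BOC gateway:

    # Illustrative only: hypothetical submission records at the BOC gateway.
    # Each record is (order_id, outcome), where outcome is "rejected" or "foc"
    # (firm order commitment).
    submissions = [
        ("A1", "rejected"), ("A1", "foc"),                       # one resubmission
        ("B2", "foc"),                                           # clean order
        ("C3", "rejected"), ("C3", "rejected"), ("C3", "foc"),   # two resubmissions
    ]

    total_submissions = len(submissions)
    rejected = sum(1 for _, outcome in submissions if outcome == "rejected")
    foc_orders = {oid for oid, outcome in submissions if outcome == "foc"}

    # Percent Rejected Orders: rejected submissions divided by total submissions.
    percent_rejected = 100.0 * rejected / total_submissions

    # Order Submissions per Order: total submissions divided by orders receiving a FOC.
    submissions_per_order = total_submissions / len(foc_orders)

    print(f"Percent Rejected Orders: {percent_rejected:.1f}%")          # 50.0%
    print(f"Order Submissions per Order: {submissions_per_order:.2f}")  # 2.00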

56. BellSouth has proposed the following ordering performance measures:

  • Firm Order Commitment (FOC) Cycle Time: Not yet available. Measures FOCs returned in less than 4, 6, 8, 12, and 24 hours for orders that flow through without human intervention, excluding rejects (Stacy Performance Aff. Ex. WNS-8 ¶ 4b). Combines residence and business, but excludes any order requiring human intervention. This measure, as defined, should include all orders and should separate residence and business orders. FOC cycle-time performance measures are included in BellSouth's interconnection agreements with AT&T and Time Warner.
  • Rejected Order Cycle Time: Not yet available. Measures percent rejected orders returned in less than one hour. Included in BellSouth's interconnection agreements with AT&T and Time Warner.

57. BellSouth has not included the following ordering performance measures either in its permanent measurements or in interconnection agreements that I have reviewed:

  • Total Service Order Cycle Time
  • Any measures of service order quality. Not all of the following are required, but one or more is necessary to determine the reliability of the CLEC service order submission process:
    • Service Order Accuracy
    • Percent Rejected Orders
    • Order Submissions per Order
    • Percent Flow Through
  • Ordering OSS Availability
  • Ordering Center Availability: However, BellSouth has committed to 24 hours a day 7 days a week availability.
  • Speed of Answer—Ordering Center

58. Provisioning: Provisioning performance measures depict how quickly and how accurately end-user service orders are completed. Parity in performing provisioning functions results in CLEC customers receiving service with speed and quality at least equal to that received by BOC retail or subsidiary customers. Provisioning measures have a long and detailed history within the BOCs. They are used to review and compare manager performance and are also required by state and federal regulatory bodies. Provisioning is a process highly visible to end users and, therefore, is a key determinant of CLEC success in the marketplace. Typical provisioning performance measures include the following:

  • Service Provisioning Interval: A critical determinant of provisioning parity or adequacy, the interval measures the time from customer request for service to completion when the appointment is offered by the BOC, either from a common appointment database, generally used in a resale environment, or by agreed-to appointment intervals, more commonly used in a UNE environment. Service Provisioning Interval should be measured both as a mean, or average interval, and as a percent over a standard interval. Only next available appointments offered from the work schedule OSS should be included for measurement. Customer-requested due dates, shorter or longer than the offered appointment, should be excluded.
    • Average Service Provisioning Interval: Measured in days from end-user request to order completion and counted separately for dispatched and non-dispatched orders. Average interval is the more important of the two measures because it depicts the result for all orders rather than just the "tail," or orders completed out of interval. For example, if the BOC completes 95% of its own retail service orders within 5 days and 95% of a CLEC's resale orders within 5 days, it is possible that the mean interval for the BOC retail orders could be significantly different (higher or lower) from that for the CLEC's orders. (A numeric illustration of this point follows this list.)

      Provisioning in a resale environment calls for parity performance measures, while provisioning in a UNE environment generally calls for adequacy performance measures. Some UNE processes are more analogous to BOC retail processes than others; however, statistically valid performance parity comparisons require mirrored processes provided to the CLEC and to BOC retail customers. Thus:

      • BOC Retail to CLEC Resale Migration: When a customer is moving from BOC retail service to CLEC resale, provisioning interval is a parity performance measure, comparing equivalent processes from the customer's viewpoint.
      • No Service to CLEC Resale Migration: Provisioning interval is a parity measure, comparable to new service offered by the BOC to its retail customers.
      • BOC Retail to CLEC UNE Migration: When a customer is moving from BOC retail service to CLEC UNE-based service, provisioning interval is likely to be an adequacy measure used to indicate whether the CLEC is afforded a "meaningful opportunity to compete." UNE loop provisioning clearly calls for such measures because of the non-analogous functions provided to the CLEC. UNE platform provisioning is less clear. On one hand, an end-to-end combination of elements may look like resale to the end user, and provisioning of such a combination may require analogous BOC software changes only. At the same time, the BOC may have internal network element inventory or other changes to make that would render the overall process non-analogous.
      • No Service to CLEC UNE Migration: A provisioning adequacy performance measure.
      • CLEC Resale to CLEC UNE Migration: When a CLEC chooses to move a customer from resale to UNE (loop, combination, or platform), the move may or may not be transparent to the end user. If non-transparent changes in service are made at the same time, interval is an adequacy measure (see above for loop/platform differences). If no service changes are made or the changes are otherwise transparent to the end user, a performance measure may still be appropriate, albeit related to transactional, rather than service concerns.
    • Percent Service Provisioned Out of Interval: Measured as the percentage of service orders completed in more than X days. Ideally, measured incrementally by day: for example, orders completed in more than 3 days, 4 days, 5 days, and 6 days. This performance measure depicts the tail of the interval curve. Combined with the Average Service Provisioning Interval, it portrays a robust picture of provisioning cycle time.
  • Percent Trunks Provisioned Out of Interval: While not related to end-user perception of service, this performance measure depicts the speed with which the CLEC can build or expand its network capability so as to provide service in a timely manner. As such, it measures whether the CLEC has been provided the wherewithal to provide local service—a "meaningful opportunity to compete."
  • Port Availability: Measures, in a facilities-based interconnection arrangement, the timely availability of switching ports through which a CLEC interconnects with the BOC's network.
  • Percent Missed Appointments—Company Reasons: When tied to provisioning interval, a critical measure of provisioning cycle-time performance. BOCs have historically used this as a key measure, and reporting of results is required by many state regulatory bodies and the FCC. Missed appointments is a parity measure under resale and an adequacy measure under UNE. Order completion is measured against the original CLEC-requested due date. No due date changes may be made unless explicitly specified by the end user or explicitly agreed to by the CLEC and the BOC. Orders missed for company reasons—load, facilities, or other—are included. Orders missed due to customer reasons are not counted as a miss for purposes of this measure.
  • Percent New Service Failures: Measures the number of trouble reports on newly provisioned service during the first 7 to 30 days after order completion. Studies have shown high correlations between trouble reports and provisioning errors within 7 to 10 days, lower correlations beyond 10 days. New Service Failures is an excellent measure of provisioning quality and a reliable determinant of provisioning parity.
  • Completed Order Accuracy: Measures the extent to which orders are completed by the BOC as ordered by the CLEC. It represents the quality of the provisioning process from the BOC gateway through order completion. Completed Order Accuracy will likely correlate with New Service Failures, in that about half of new service trouble reports relate to products or services ordered but not installed or products and services installed but not ordered.
  • Orders Held for Facilities: Measures service orders not completed for a specified period of time, usually 30 days, following the due date, generally for lack of network facilities. This is an important measure in determining whether the BOC prioritizes new facility work in a nondiscriminatory manner.
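
The following sketch (Python, with deliberately contrived data) illustrates the point made under Average Service Provisioning Interval above: two order populations can show identical percentages completed within a standard interval while their average intervals differ markedly:

    # Illustrative only: contrived interval data (in days) for two order populations.
    boc_retail = [1] * 95 + [6] * 5    # 95% within 5 days; mean 1.25 days
    clec_resale = [5] * 95 + [9] * 5   # 95% within 5 days; mean 5.20 days

    for name, data in (("BOC retail", boc_retail), ("CLEC resale", clec_resale)):
        pct_within = 100.0 * sum(1 for d in data if d <= 5) / len(data)
        mean = sum(data) / len(data)
        print(f"{name}: {pct_within:.0f}% within 5 days, average {mean:.2f} days")

This is why both the average interval and the percent-out-of-interval forms of the measure are needed to portray provisioning cycle time fully.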

59. BellSouth has proposed the following provisioning performance measures:

  • Percent Service Provisioned Out of Interval: Not proposed as a permanent measurement but negotiated as part of its interconnection agreements with AT&T and Time Warner. Applied to both resale and UNE interconnection arrangements, reported by percent completed over 2 days, 3 days, 4 days, and 5 days.
  • Percent Trunk Order Due Dates Missed.
  • Percent Service Order Missed Appointments—Company Reasons: Proposed for both resale and UNE.
  • Percent New Service Failures—Reports Received Within 30 Days of Installation: Pertains to resale, UNE, and trunk circuit provisioning.

Where appropriate, BellSouth will disaggregate provisioning performance results into two sub-categories, non-dispatch and dispatch out.

60. BellSouth has not included the following provisioning performance measures either in its permanent measurements or in interconnection agreements that I have reviewed:

  • Average Provisioning Interval: This is a critical performance measurement. BellSouth states that it has gathered and produced this data but "has not agreed to incorporate this data in the results regularly produced for the CLECs or state commissions, since the set of % Provisioning Appointments Met data already indicates BST's performance in this area" (Stacy Performance Aff. ¶ 52). BellSouth argues that BST and CLECs draw appointments from the same database and, further, that the OSS provides appointments on a first-come, first-served basis. Therefore, it argues, missed appointments are the only necessary means of detecting discrimination in the process.

    In its application, BellSouth provides a table reflecting relative BST/CLEC interval performance in a given month, concluding that the results show "substantially equal levels of performance" (Stacy Performance Aff. ¶ 53). Stacy further claims non-discriminatory performance in Exhibit WNS-10 to his Performance Affidavit, which shows average service order interval results for BST and CLECs.

    One problem with this data is that it measures the interval from service order issuance to original due date, not completion date. Second, the results represent only one month of data. Finally, analysis of the data, particularly in Exhibit WNS-10B, reveals some significant differences and may not show non-discrimination.

    Average Service Provisioning Interval is critical to a determination of parity or adequacy:

    • First, it is very visible to end users and highly correlates with their perception of their service provider.
    • While due dates may be offered on a non-discriminatory basis, completion dates are the key to this measurement. BellSouth argues appropriately that percent appointments not met may reveal the differences between the original due date and the completion date. However, this is not adequate to detect discrimination. Even if the percentage of appointments not met is equal, the average completion interval could differ significantly. For example, once missed, BellSouth could focus its attention on completing BST service orders at the expense of CLEC service orders.
    • BellSouth has made it clear that much of the data required to provide the average interval is readily and abundantly available, although some enhancements may be necessary to partition "next available appointment" orders.
  • Port Availability: The only performance measure used to detect discrimination in a total facilities-based interconnection arrangement.
  • Completed Order Accuracy
  • Orders Held for Facilities

61. Maintenance: Maintenance performance measures depict two sub-processes: (1) trouble reporting and clearance, and (2) network quality.

  • Trouble Reporting: Trouble reporting performance measures describe how quickly and how well end-user trouble is cleared. Performance parity exists if a CLEC customer trouble is cleared with at least the same speed and quality as the BOC retail or subsidiary customer. This is a highly visible process to the end user and has significant impact on the end user's perception of the service provider. Typical trouble reporting performance measures include the following:
    • Trouble Report Rate: Measured as the number of trouble reports per customer or access line per month (usually annualized). Data is gathered by product and market categories and can be analyzed by cause and other factors. This is the most important measure of service reliability and historically positively correlates with an end user's perception of their local service provider.
    • Percent Repeat Reports: Measured as the percentage of end-user troubles on the same access line within an agreed number of days of the original trouble. Repeat reports are a key indicator of maintenance process reliability and, historically, have a positive correlation with an end user's perception of local service provider quality. Studies have shown high correlation between repeat reports and repair errors occurring within 7-10 days and lower correlations beyond 10 days. (This measure and Trouble Report Rate are illustrated in the sketch following this list.)
    • Percent Out of Service Over 24 Hours: Measured as the percentage of out-of-service troubles not cleared within 24 hours. This measure relates to Mean Time to Restore, but specifically measures parity in out-of-service restoral. Required by many state regulatory bodies.
    • Percent Missed Appointments: Measures the percentage of trouble reports cleared after the promised appointment. Highly visible to end users. Requires that appointment times, once set, cannot be changed except by the end user.
    • Mean Time to Repair: Measured as the average interval from trouble report to clearance. This is the key measure of trouble report cycle time. Should be gathered and reported on a product and market basis.
    • Trunks Restored Out of Interval: Measures the percentage of CLEC trunks reported out of service and restored after an agreed-to interval. Important because it impacts the CLEC's ability to handle its traffic efficiently and with a high level of quality.
    • Maintenance OSS Availability: Measures the available hours of the BOC's maintenance OSSs, as well as system reliability.
    • Maintenance Center Speed of Answer: Measures the average time to reach a BOC repair service representative. An important measure of adequacy in a manual environment or in a mechanized environment where CLEC service representatives have a need to speak with their BOC peers.
  • Network Quality: Network quality performance measures measure how well the BOC's network is maintained and whether the BOC's network performance discriminates against new entrants. Comparisons are between the performance distribution for the BOC's retail or subsidiary customers and the performance distribution for a CLEC's customers. The network can be thought of as comprising three parts: switches, loops, and trunks. Typical performance measures include Number of Major Network Events; Signaling System 7 (SS7) Link and Database Failures; Post Dialtone Delay; various transmission measures, including Loop Transmission Loss, Signal-to-Noise Ratio, Balance, and Idle Circuit Noise; and Blocked Call Attempts. Current network design, architecture, and operating systems make discrimination in switching and transmission performance measures highly unlikely. Unless specifically reprogrammed to do so, the network is not likely to recognize the carrier "owner" of a call processing through it. In contrast, a key area for parity or adequacy concern is trunk blockage, where planning and engineering can have a bearing on individual carrier service quality.
    • Percent Blocked Calls: Measures trunking grade (quality) of service. It relates to proper forecasting, engineering, provisioning, and maintenance of intraLATA and interLATA trunks. Generally a parity measurement because CLEC results can be compared to similar BOC trunk group results.
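
The two trouble-reporting measures most closely tied to end-user perception, Trouble Report Rate and Percent Repeat Reports, can be computed as in the following sketch (Python; the line counts and report dates are hypothetical; a 30-day repeat window is used, consistent with the definitions discussed in this affidavit):

    from datetime import date

    # Illustrative only: hypothetical trouble reports for one month on 10,000 lines.
    access_lines = 10_000
    troubles = [
        ("L1", date(1997, 9, 3)),
        ("L2", date(1997, 9, 10)),
        ("L3", date(1997, 9, 15)),
        ("L1", date(1997, 9, 20)),  # second report on L1, 17 days after the first
    ]

    # Trouble Report Rate: reports per 100 access lines per month (here also annualized).
    rate_per_100 = 100.0 * len(troubles) / access_lines
    print(f"trouble report rate: {rate_per_100:.2f} per 100 lines/month "
          f"({rate_per_100 * 12:.2f} annualized)")

    # Percent Repeat Reports: reports on the same line within 30 days of a prior report.
    REPEAT_WINDOW_DAYS = 30
    repeats = 0
    last_report = {}  # line_id -> date of most recent prior report
    for line, day in sorted(troubles, key=lambda t: t[1]):
        if line in last_report and (day - last_report[line]).days <= REPEAT_WINDOW_DAYS:
            repeats += 1
        last_report[line] = day
    print(f"percent repeat reports: {100.0 * repeats / len(troubles):.1f}%")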

62. BellSouth proposes the following maintenance and repair performance measures:

  • Trouble Report Rate: Proposed for resale, UNE, and trunks.
  • Percent Repeat Reports: Trouble reports received within 30 days of the original report are included. Proposed for resale and UNE.
  • Percent Out of Service Over 24 Hours: Proposed for resale.
  • Percent Missed Appointments: In its permanent measurements, proposed for resale only, but included for UNE as well in its interconnection agreement with AT&T (Stacy Performance Aff. Ex. WNS-6).
  • Mean Time to Repair: Proposed for resale, UNE, and trunks.
  • Maintenance Center Speed of Answer: Not proposed in its permanent measurements, but included in its interconnection agreement with AT&T for both resale and UNE.
  • Network Downtime, by network element: Included in its interconnection agreement with Time Warner.
  • Trunking Grade of Service Blocking: Percentages are proposed for CLEC local service trunk group interconnection, BST local service trunk groups, and common transport trunk groups.

Where appropriate, BellSouth will disaggregate maintenance and repair performance measure results into two sub-categories, non-dispatch and dispatch out.

63. The only maintenance performance measure BellSouth has not proposed in its permanent measurements or in any interconnection agreement is:

  • Maintenance OSS Availability.

64. Billing: Billing performance measures measure the timeliness, accuracy, and completeness of end-user billing records and wholesale bills. These are measures of performance adequacy, important because, once provisioned, billing is the most frequent and visible contact an end user has with the provider. Typical billing performance measures include the following:

  • Bill Timeliness: Measures the percentage of end-user and wholesale billing records delivered on time.
  • Bill Accuracy: Measures the percentage of accurate end-user and wholesale billing records.
  • Bill Completeness: Measures the percentage of complete end-user and wholesale billing records.

65. BellSouth has not proposed any billing performance measures in its permanent measurements. However, it includes the following in its interconnection agreement with AT&T:

  • Bill Timeliness
  • Bill Accuracy
  • Bill Completeness

Other: Toll and Directory Assistance performance measures measure the speed of response to CLEC customers by BOC operators, as well as the speed and accuracy of 911 database updates. They are measures of performance parity. Performance measures include the following:
  • Operator Services Toll Speed of Answer: Measures raw interval in seconds or as a percentage under a set objective.
  • Directory Assistance Speed of Answer: Measures raw interval in seconds or as a percentage under a set objective.
  • 911 Database Update Timeliness and Accuracy: Measures the percentage of missed due dates of 911 database updates and the percentage of accurate updates.

66. BellSouth has not proposed any "Other" performance measures in its permanent measurements or in any interconnection agreements that I have reviewed. However, in its application, BellSouth commits to non-discriminatory access to 911 and E911 services and to maintaining its 911 database for CLECs on the same daily schedule it uses for its own end-user customers. It also commits to non-discriminatory access to Directory Assistance and other Operator Services call completion. (BellSouth Brief at 45)

C. MARKET PARITY

67. Market parity: Market parity ensures that agreed-to performance measures present appropriate customer group comparisons between the BOC and CLECs. This requires the BOC to provide service to appropriate CLEC customer groups at least equal to that provided equivalent customer groups by its retail or subsidiary units. Customer groups generally fall into two categories: Geographic and Class of Service.

Geographic parity requires that performance measures be identified and measured where a CLEC markets its products. If a CLEC offers service to an entire BOC region, appropriate performance measures would compare CLEC results to total BOC results. If a CLEC offers service to smaller geographic areas, appropriate performance measures would provide comparative BOC results for those areas.

Class of Service parity requires that performance measures be identified and measured for end-user classes of service targeted by a CLEC. For example, if a CLEC targets only small-business customers, appropriate performance standards would provide BOC results for its small-business customers only for comparison purposes.

68. BellSouth proposes the following market disaggregation of its proposed performance measures results data:

  • Geographic: BellSouth proposes to provide results on a company-wide and state-wide basis (Stacy Performance Aff. ¶ 33). The company should also commit to provide results for smaller geographic areas if a CLEC chooses to offer service in those areas.
  • Class of Service: BellSouth proposes to provide results by "type of customer, i.e., consumer, small business, or large business." (Stacy Performance Aff. ¶ 33)

D. PRODUCT PARITY

69. Product parity: Product parity ensures that agreed-to performance measures present the appropriate comparisons on a product basis between the BOC and CLECs. This requires that the BOC provide service to CLECs at least equal to that provided by its retail or subsidiary units, measured for the products a CLEC offers to end users. Product parity includes two dimensions: (1) interconnection arrangement, and (2) products or product families within those arrangements.

  • Product parity requires that performance measures be identified, measured, and reported for agreed-to interconnection arrangements. This includes both Total Service Resale ("Resale") and Unbundled Network Elements (UNE), including individual elements, element combinations, interim number portability, and platform.
  • Product parity also requires performance measures be identified, measured, and reported for products or product families a CLEC offers to end users. Examples include POTS, Subrate data, HICAP data, Centrex, and ISDN. If a CLEC offers DS1 service to its end users as part of a UNE loop resale arrangement, the BOC would need to provide results for service provided to those customers and for its own DS1 customers.

70. BellSouth proposes the following product disaggregation of its performance measure results data:

  • Interconnection Arrangement: Performance measures are proposed for resale and UNE, although not all measures have been proposed for both. No measures are proposed for total facilities-based CLECs.
  • Products offered to end users: BellSouth proposes to provide results by "type of service provided, i.e., POTS (also referred to as non-designed), and designed or special services" (Stacy Performance Aff. ¶ 33). BellSouth should further commit to provide results for any specific product a CLEC chooses to provide to end users in South Carolina.

  3. REPORTING REQUIREMENTS

71. Reporting requirements should ensure that performance measures are reported in a way that will allow CLECs and regulators to identify whether parity and adequacy have been achieved. Dimensions include (1) availability of data, (2) entities compared, (3) report frequency, (4) report accuracy, and (5) report format.

  • Availability of Data: Relates to the availability of partitioned BOC databases that allow CLECs to access performance measure results when and how they require it.
  • Entities Compared: The appropriateness of results comparisons relates to the entities for which the data will be provided: BOC retail? BOC subsidiaries? the CLEC? all CLECs? other?
  • Report Frequency: Report frequency relates to how often reports will be provided.
  • Report Accuracy: Report accuracy and completeness relate to the statistical validity of the proposed data.
  • Report Format: Report format relates to how performance standard results are presented. Are they presented in tabular or graphical form? Are they readable and understandable? Can a CLEC or regulator determine whether parity has been achieved? Have control limits been defined? How many standard deviations does the control limit represent? How many months of data are presented? Can trends be detected? How is result seasonality handled?

    BellSouth proposes the following performance measure report parameters:

  • Availability of Data: BellSouth has implemented a data warehouse that will allow CLECs access to performance measure results and raw data (Stacy Performance Aff. ¶¶ 13-15). This is an outstanding advance in creating an environment where CLECs are not dependent on ILECs for the production of performance measure reports. BellSouth commits to provide access to all measurements described in Stacy's affidavit (Stacy Performance Aff. ¶ 15).
  • Entities Compared: BellSouth proposes to provide "performance for CLECs in South Carolina, for all CLECs in BST's nine state region, and comparable total data for all of BST's retail customers." It has also included data for BST in South Carolina only (Stacy Performance Aff. ¶ 20). Although it is not clear in the application, I have assumed that "CLECs in South Carolina" includes results for individual CLECs. This is implied in its interconnection agreement with AT&T: "enable AT&T to compare BellSouth's performance for itself with respect to a specific measure to BellSouth's performance for AT&T for that same specific measure" (Stacy Performance Aff. Ex. WNS-4 ¶ 1.2).
  • Report Frequency: Although the data warehouse will allow CLECs access to raw data at any time, BellSouth generally proposes to provide performance measure reports on a monthly basis.
  • Report Format: BellSouth proposes to use statistical process control (SPC) to determine whether services are being provided at parity. Once enough historical data is collected, BellSouth will establish upper and lower control limits on performance. Although BellSouth proposes SPC for parity measures, I have assumed, for purposes of this affidavit, that a similar methodology will be used for adequacy measures where a "meaningful opportunity to compete" standard is used. BellSouth proposes that monthly variances in results will not be of any concern unless a CLEC's result is higher or lower than BST's for three consecutive months or falls outside of the control limit in any one month. Should this occur, BellSouth commits to performing a "root cause analysis" to determine the reason for the variation.

    SPC is an accepted method to reveal more-than-nominal variation in a single entity's process results over time. Whether SPC can serve as a determinant of parity between two or more entities is less clear. BellSouth and individual CLECs should negotiate an agreement as to what constitutes parity given the data that BellSouth has agreed to produce. For example: Do three standard deviations constitute the right range for being "in control"? Does being "in control" automatically mean that two entities are at parity?
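
    For illustration only, the following Python sketch shows one way the trigger rule BellSouth describes could be computed. The sketch, including the use of three-sigma control limits, the sample figures, and all function and variable names, reflects my own assumptions; it is not BellSouth's stated methodology.

      # Illustrative sketch only. Flags a measure for "root cause analysis"
      # under the proposed rule: the CLEC result falls outside the control
      # limits in any one month, or is higher or lower than BST's result
      # for three consecutive months. Three-sigma limits are an assumption.
      from statistics import mean, stdev

      def control_limits(bst_history, sigmas=3.0):
          """Upper and lower control limits from historical BST retail results."""
          m, s = mean(bst_history), stdev(bst_history)
          return m - sigmas * s, m + sigmas * s

      def needs_root_cause_analysis(clec_monthly, bst_monthly, bst_history):
          lower, upper = control_limits(bst_history)
          run_sign, run_length = 0, 0
          for clec, bst in zip(clec_monthly, bst_monthly):
              if not (lower <= clec <= upper):
                  return True  # outside the control limit in one month
              sign = (clec > bst) - (clec < bst)  # +1 higher, -1 lower, 0 equal
              run_length = run_length + 1 if sign and sign == run_sign else (1 if sign else 0)
              run_sign = sign
              if run_length >= 3:
                  return True  # higher or lower than BST three months running
          return False

      # Hypothetical example: average repair intervals in hours.
      bst_history = [24.1, 23.8, 24.5, 24.0, 23.9, 24.3]
      print(needs_root_cause_analysis(
          clec_monthly=[24.6, 24.8, 24.7],
          bst_monthly=[24.0, 24.1, 23.9],
          bst_history=bst_history))  # True: three consecutive months higher

    Under this sketch, a single out-of-control month or any run of three consecutive months on the same side of BST's result would trigger the root cause analysis BellSouth describes; whether such a trigger equates to a finding of non-parity remains the open question noted above.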

VI. CONCLUSIONS

72. BellSouth clearly has committed to provide service to its CLEC customers in a non-discriminatory manner. It further commits to collecting all the necessary data and providing reports to demonstrate parity or adequacy of results.

73. BellSouth proposes a robust set of performance measures for the maintenance and repair process, but less robust measures for provisioning and ordering. No measures are proposed for pre-ordering or billing (although billing measures are included in its interconnection agreement with AT&T).

74. BellSouth's proposed market and product data disaggregation and their proposed performance measure reports and data availability are excellent.

75. Specific performance measures BellSouth should be required to provide include the following. "Include as an ongoing measurement" refers to performance measures included in interconnection agreements but not proposed as a permanent measurement. Critical measures are in italics, and bold face indicates additional emphasis:

  • Pre-order OSS Availability
  • Pre-order System Response Times—Five key functions
  • Firm Order Confirmation Cycle Time: Complete state-specific development
  • Reject Cycle Time: Complete state-specific development
  • Total Service Order Cycle Time
  • Service Order Quality: One or more suggested measures
  • Ordering OSS Availability
  • Speed of Answer—Ordering Center
  • Average Service Provisioning Interval
  • Percent Service Provisioned Out of Interval: Include as an ongoing measurement
  • Port Availability
  • Completed Order Accuracy
  • Orders Held for Facilities
  • Out of Service Over 24 Hours for UNE
  • Repair Missed Appointment for UNE: Include as an ongoing measurement
  • Maintenance OSS Availability
  • Billing Timeliness: Include as an ongoing measurement
  • Billing Accuracy: Include as an ongoing measurement
  • Billing Completeness: Include as an ongoing measurement
  • Operator Services Toll Speed of Answer
  • Directory Assistance Speed of Answer
  • 911 Database Update Timeliness and Accuracy

76. On the basis of the shortfalls identified above, I conclude that BellSouth has not provided sufficient performance measures in its application to make a determination of parity or adequacy in the provision of resale or UNE products and services to CLECs in the state of South Carolina.
