Q & A after demo of the HCS vLab at the Annual meeting of the Australian Linguistics Society (ALS) 2013

This page presents the answers to questions raised during the demonstration of the HCS vLab at the Annual meeting of the Australian Linguistics Society (ALS 2013) in Melbourne on Friday 04/10/2013.

1) Is it possible to search by audio type and data type, e.g. sentences or words? For instance, in the Mitchell and Delbridge data, that information is in the file names of the original data.

Answer: Since an item's name is derived from the original filename, searching on item names would allow filtering by audio type and data type. This kind of search is not currently possible, but general metadata search functionality is being built into the system. Input from someone who knows each data source well would also help us improve the ingestion of metadata for that source.
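
As an illustration, the sketch below shows how such a name-based filter might work in Python. The filename convention it assumes (a `_word_` or `_sentence_` marker in the item name) is hypothetical; the real Mitchell and Delbridge naming scheme would need to be checked against the source data.

```python
# Hypothetical sketch: filter items by a data type encoded in the item name.
# The naming convention below is assumed for illustration only.

def item_type(item_name):
    """Guess the data type from a name like 'S1234_word_hVd.wav' (assumed format)."""
    if "_word_" in item_name:
        return "word"
    if "_sentence_" in item_name:
        return "sentence"
    return "unknown"

items = ["S1234_word_hVd.wav", "S1234_sentence_01.wav", "S1234_interview.wav"]
word_items = [name for name in items if item_type(name) == "word"]
print(word_items)  # ['S1234_word_hVd.wav']
```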

2) Is it possible to use the search box on the main page to search the metadata fields (e.g. location of recording, or origin of speaker), not just the Item text contents?

Answer: We are currently developing this functionality.

3) Is there funding available to support researchers interested in contributing legacy data to the HCS vLab? For instance, people working on Australian languages might submit their data to PARADISEC and the data could enter the HCS vLab indirectly that way.

Answer: There is currently no funding available, but we will set up a process for taking, cleaning, and ingesting data, and this process will be documented with the final release of the HCS vLab. Submitting data to PARADISEC was indeed the path envisaged for Australian languages data, but it means we need a way to regularly update the PARADISEC collection ingested into the HCS vLab. This may be put in place as part of Phase II (i.e. after 01/07/14).

4) Is it possible to change the name of an Item List in the Discovery Interface, not just in Galaxy?

Answer: We will add support for renaming item lists.

5) There were browser issues with viewing EOPAS in earlier versions of the HCS vLab.

Answer: The issues should be resolved in the new version of the HCS vLab.

6) Will Praat be available in the HCS vLab?

Answer: Praat is not part of the set of tools slated for Phase I of the HCS vLab project, but we agree it would be good if we could find a way to include it. We are keeping a list of tools people have said they want, which we will consider for inclusion in Phase II (from July 2014). For now, users can download data files and open them in Praat. If users also need the annotation files, we could add support for converting annotations to Praat format, or write a widget that converts JSON-LD (our current annotation format) to Praat format; see the sketch below.
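
To illustrate the last option, here is a rough Python sketch of such a converter. The input shape (a flat list of start/end/label records) is a simplified stand-in, not the actual JSON-LD annotation schema used by the HCS vLab.

```python
# Rough sketch: render a list of annotations as a single-tier Praat TextGrid.
# Input shape is assumed for illustration; it is not the HCS vLab JSON-LD schema.

def to_textgrid(annotations, tier_name="annotation"):
    """Render one interval tier as a Praat ooTextFile TextGrid string."""
    xmin = min(a["start"] for a in annotations)
    xmax = max(a["end"] for a in annotations)
    lines = [
        'File type = "ooTextFile"',
        'Object class = "TextGrid"',
        '',
        f'xmin = {xmin}',
        f'xmax = {xmax}',
        'tiers? <exists>',
        'size = 1',
        'item []:',
        '    item [1]:',
        '        class = "IntervalTier"',
        f'        name = "{tier_name}"',
        f'        xmin = {xmin}',
        f'        xmax = {xmax}',
        f'        intervals: size = {len(annotations)}',
    ]
    for i, a in enumerate(annotations, start=1):
        lines += [
            f'        intervals [{i}]:',
            f'            xmin = {a["start"]}',
            f'            xmax = {a["end"]}',
            f'            text = "{a["label"]}"',
        ]
    return "\n".join(lines)

print(to_textgrid([{"start": 0.0, "end": 0.8, "label": "hello"},
                   {"start": 0.8, "end": 1.5, "label": "world"}]))
```

A full converter would also need to fill any gaps between annotations with empty intervals, since Praat expects a tier's intervals to tile the whole time range.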

7) Could ultrasound and EEG (any electronic data, really) be put into the HCS vLab and then be available for analysis there?

Answer: This is something we would like to support, and it should already be possible: there is nothing special about these files that would prevent them from being added.

8) Will ELAN be included in the tools? Many linguists use ELAN, e.g. with video for sign language research.

Answer: Some ELAN annotations are already present in the EOPAS datasets, so ELAN is supported to some extent.

9) For linguists who work with historical sources (e.g. manuscripts and colonial letters), can a PDF scan of the original source, rather than the typed-up version, be treated as the ‘Primary Data’?

Answer: We agree that the typed-up versions shouldn’t be considered “Primary Data”, but they are listed as “Original” by the collection creators. There are some PDF files in PARADISEC, and it would be possible for researchers to add PDF scans to any of the AusNC collections.

10) Is there a vLab FAQ page to which we can add these questions and their answers?

Answer: There is now! We will add to it as more questions come in.

11) Is there an HCS vLab mailing list?

Answer: Sorry, not yet. But watch this space.

This post is licensed under CC BY 4.0 by the author.