
Department of Electrical & Computer Engineering University of California, Davis
EEC 170 – Computer Architecture Winter Quarter 2019
Laboratory Exercise 2: RISC-V Computer Arithmetic
Due Date: Feb 15, 2019 Full Points 150
Objectives
The objectives of this exercise are fourfold.
a) You will learn RISC-V assembly programming, especially the use of shift and logical operations such as and and or.
b) You will learn how to emulate floating point operations in software when the hardware does not support them.
c) You will learn about a newer floating point format called bfloat16 that is increasingly used in machine learning applications, where a wide dynamic range is needed but lower precision is acceptable.
d) Finally, this exercise will prepare you for the midterm exam, as a successful implementation of Lab 2 means you will have mastered Chapters 2 and 3.
Background
The bfloat16 floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format with the intent of accelerating machine learning and near-sensor computing.[1] It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only 8 bits of precision rather than the 24-bit significand of the binary32 format. The bfloat16 format is utilized in upcoming Intel AI processors such as the Nervana NNP-L1000, Xeon processors, and Intel FPGAs,[2][3][4] Google Cloud TPUs,[5][6][7] and TensorFlow.[7][8]
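As a concrete example (using a value that also appears in the skeleton program later in this handout), the decimal number 0.75 is 1.1 (binary) x 2^-1. Its bfloat16 encoding is sign = 0, biased exponent = 127 - 1 = 126 = 01111110, stored mantissa = 1000000, so the full 16-bit pattern is 0 01111110 1000000, which is 16192 as a decimal integer.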
Description of the Project
Your processor does not have floating point hardware, but your application requires floating point arithmetic. Your task is to implement an efficient function that realizes floating point addition and floating point subtraction. You will assume the bfloat16 representation. This representation is identical to the IEEE 754 single-precision standard discussed in class and in the book, except that the mantissa is only 7 bits, so the entire representation fits in 16 bits.
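For illustration only, here is a minimal sketch of how the three bfloat16 fields of x might be unpacked with shift and logical instructions, assuming x has already been loaded into t1 as in the skeleton below; the destination registers t3, t4, and t5 are arbitrary choices, not something the skeleton requires:
srli t3, t1, 15      # t3 = sign bit of x (the upper 16 bits of the loaded word are zero)
srli t4, t1, 7       # move the exponent field into the low bits
andi t4, t4, 0xff    # t4 = 8-bit biased exponent of x
andi t5, t1, 0x7f    # t5 = 7-bit stored mantissa of x
ori  t5, t5, 0x80    # restore the implicit leading 1 (only valid when x is a normalized, non-zero value)
Repacking the final result is the reverse: shift the sign, exponent, and mantissa fields back into position and combine them with or.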
What You Need to Do
1. Write RISC-V assembly code and verify using the Venus RISC-V simulator.
2. You are given a skeleton file that consists of declarations for inputs x and y, output z, and the operation to be performed. If operation = 0, then z = x + y; otherwise z = x - y.

3. z should be a valid bfloat16 representation of the answer, which means it should be normalized. Inputs x and y are also normalized bfloat16 representations. (A worked example using the skeleton's sample values follows this list.)
4. Input and output are done in decimal integer representation. So, the bfloat16 representations of x and y are input to your program as decimal integers, and your output should also be a decimal integer corresponding to the bfloat16 representation of z.
5. You can ignore rounding, overflow, and underflow, but make sure your code handles test cases where one of the operands is zero.
6. TA will provide additional test cases.
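A worked example using the sample values from the skeleton below: x = 16192 encodes 0.75 = 1.1 (binary) x 2^-1 and y = 16544 encodes 5 = 1.01 (binary) x 2^2. To compute x - y, align the operand with the smaller exponent to the larger exponent (0.75 = 0.0011 x 2^2). Because |y| > |x|, subtract the smaller aligned significand from the larger (1.0100000 - 0.0011000 = 1.0001000) and make the result negative. The magnitude 1.0001 (binary) x 2^2 = 4.25 is already normalized, so z encodes -4.25 as 1 10000001 0001000, which is 49288 as a decimal integer.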
Useful Resources on the Web
https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
https://oletus.github.io/float16-simulator.js/
https://www.rapidtables.com/convert/number/binary-to-decimal.html
https://www.binaryhexconverter.com/decimal-to-binary-converter
Skeleton File and Sample Results
.data
x: .word 16192
y: .word 16544
z: .word 0
operation: .word 1
.text
la t0, operation
lw t0, 0(t0)
la t1, x
la t2, y
lw t1, 0(t1)
lw t2, 0(t2)
# 0.75 = 16192
# 5 = 16544
# expected result: 0.75 - 5 = -4.25, which is 49288 if operation is 1
# 0 is add, 1 is subtract
# t0 contains the flag whether to add or subtract
# t1 has x in bfloat16 representation
# t2 has y in bfloat16 representation
# your program starts here
# store the final result in z
# print the result z as a decimal integer and exit
print:
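# a possible way to print z (not part of the original skeleton): this assumes
# Venus's print-integer environmental call, id 1, which prints the value in a1;
# check the simulator's documentation before relying on it
la a1, z
lw a1, 0(a1)
addi a0, x0, 1
ecall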
#exit
addi a0, x0, 10
ecall